In the vast ocean of AI-generated content, it’s becoming increasingly difficult to distinguish what’s original and what’s not.

The truth is that in order to detect whether a piece of content is AI-written or not, you need AI!

As a hands-on user, I’ve put Originality AI to the test. I explored its capabilities, drew comparisons with competitor tools, and delved into how it serves bloggers like us.

And let me tell you, this tool is pretty impressive.

It definitely has the potential to reshape the way we look at original content.

I’ll share with you 5 vital tests that reveal just how effective this tool is.

Trust me, you won’t want to miss it!

  • Price: $14.95 per month
  • Free trial: 50 credits with its free AI detection Chrome Extension (1 credit can scan 100 words)
  • Features: AI content detection, plagiarism detection
  • Alternative: Content at Scale
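Given the 1-credit-per-100-words rate above, estimating what a scan will consume is simple arithmetic. Here's a small sketch; the assumption that a partial 100-word block still costs a full credit is mine, not confirmed by Originality AI:

```python
import math

MONTHLY_CREDITS = 3000   # included in the $14.95/mo plan
MONTHLY_PRICE = 14.95

def credits_needed(word_count):
    # 1 credit scans 100 words; assume partial blocks cost a full credit.
    return math.ceil(word_count / 100)

def approx_cost(word_count):
    # Effective cost if credits are valued at the subscription rate.
    return credits_needed(word_count) * MONTHLY_PRICE / MONTHLY_CREDITS
```

By this math, a 1,500-word draft uses 15 credits, so the 3,000 monthly credits cover roughly 300,000 words of scanning.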

👍 Pros (What I like about Originality.AI)

Upon my first-hand engagement with Originality AI’s suite, here’s my breakdown of its strengths:

  • All-in-one tool: It combines an AI-written content detector, plagiarism scanner, and readability tools in one neat package. For content managers, it’s a trifecta!
  • Versatility: Originality’s 2.0 model can detect even GPT-4 and ChatGPT-generated content, including paraphrased and hybrid content. Its false positives have decreased over time, showing improving precision.
  • Flexible AI detection: Their content scanning system is a standout. Color-coded markers estimate the ‘likelihood’ of a text being AI-written, offering a more nuanced approach than rigid, absolute verdicts.
  • Content creation tracking: The ‘Watch a Writer Write’ Chrome extension monitors each keystroke in Google Docs, recording edits and deletions. While not foolproof against manual input of AI-written content, it does lend an extra layer of scrutiny, syncing Originality results with document history.
  • Comprehensive reporting: You receive a detailed originality score report with writer information and writing session history. While these details strengthen the stats, they aren’t foolproof.
  • API access: With API access, Originality AI empowers publishers to weave AI-content detection into their native systems—a plus for large content teams and agencies.
  • Team and organizational features: Originality AI is a tool for everyone—from freelancers to content teams. You can add team members to your workspace and have everyone working on the same dashboard. Plus, you can tag your content scans to organize them by projects, clients, or industries.

All these facets make Originality AI a valuable addition to your content editing arsenal for maintaining quality control and minimizing instances of plagiarism and AI-generated content.
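For teams considering the API access described above, an integration might look like the sketch below. Caveat: the endpoint URL, header name, and payload field are my assumptions modeled on typical REST scan APIs; check the official API documentation before relying on any of them.

```python
import json
import urllib.request

API_URL = "https://api.originality.ai/api/v1/scan/ai"  # assumed endpoint

def build_scan_request(text, api_key):
    """Build the HTTP request for an AI scan (field names are assumptions)."""
    payload = json.dumps({"content": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "X-OAI-API-KEY": api_key,   # assumed auth header
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_scan_request("Sample draft to check.", "YOUR_API_KEY")
    # urllib.request.urlopen(req) would send the scan; the response is
    # expected to include per-document AI/original scores.
```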

👎 Cons (What I don’t like about Originality.AI)

No tool is perfect, and Originality AI is no exception.

Here are a few pain points I observed during my evaluation:

  • Sensitivity: Since the 1.4 model update, the tool leans toward marking text as AI-generated, sometimes over genuine human content. This increased sensitivity results in a higher rate of false positives (contrary to their claim). I noticed even slight formatting changes can cause a document to be flagged as AI. Jokingly, it seems it would even flag the Bible as AI!
  • Guesswork vs. evidence: Current AI detectors, Originality AI included, aren’t always spot-on. It’s challenging to accept their results as foolproof: classifying a piece of work as AI-generated isn’t always accurate, because these tools lack a detection method that can determine content origin with real precision. A recent study even suggests that reliably detecting AI-generated text is mathematically impossible.
  • Contextual accuracy: I found Originality AI’s accuracy inconsistent across different content types. The tool works best with blog posts or online articles but struggles with academic pieces like scientific summaries or college essays. This is likely because Originality AI mainly trains on web content.
  • GPT-exclusive training: Originality AI is trained mainly on GPT, so it’s great at spotting GPT content but struggles with newer Large Language Models (LLMs). The question arises—how will it adapt to ever-evolving LLMs like Claude, Bing, etc.? If it doesn’t adapt, its ability to detect all AI-generated content could be limited since not all AI content comes from GPT.

Upfront bottom line

Originality AI certainly packs a punch when it comes to accuracy:

With its 2.0 model, it accurately identified ChatGPT to Human content 98.8% of the time.

However, there’s a caveat:

The tool has bias and sensitivity issues that raise reliability questions.

The unique selling propositions are:

  • The ‘Watch a Writer Write’ Chrome extension for tracking content creation.
  • Effective GPT-4 detection capability.
  • Website Scanner for bulk detection.

When it comes to advanced features, Originality AI doesn’t fall short:

  • It offers a readability checker and an in-built plagiarism detection tool, making it a valuable asset for quality control and content audits.

Best for?

  • Content agencies, managers, bloggers, and online publishers. They can leverage Originality AI to maintain high-quality content and streamline their editing process.

Originality.AI overview

In November 2022, John Gillham saw a gap in the plagiarism checker market and decided to fill it by launching Originality.AI. Instead of creating just another run-of-the-mill plagiarism detection tool, he aimed higher.

Personally, I find Originality.AI a fresh take on plagiarism detection, blending the practical with the innovative. With features like scan history, detection scores, and shareable results, it’s evident that this tool isn’t just for solo writers but for entire teams.

At its core, Originality.AI is a premium AI content detection tool specialized in recognizing content produced by LLMs like GPT-4, GPT-3, GPT-2, GPT-NEO, ChatGPT, and GPT-J.

However, here’s an interesting theory: I think Originality.AI started as a plagiarism-checking solution and coincidentally caught the AI content wave that began in 2022.

What makes it stand out from typical plagiarism checkers? Its aptitude for also pinpointing duplicate content, bridging the gap between AI detection and traditional plagiarism checking. Lucky for them, they found a unique niche.

How does Originality AI work?

Originality AI employs a ‘Fine-Tuning AI Model Approach’: supervised learning with a fine-tuned AI language model, including a modified BERT model.

The process involves:

  • Training on patterns: Feeding millions of records of known AI and human content to the model, teaching it to recognize distinct patterns.
  • Testing & evaluation: After each training, a large dataset is used to gauge if the new model is an improvement.

The approach helps distinguish AI-written text from human content, giving content editors a tool to fight machine-generated content spam.
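To make the ‘training on patterns’ idea concrete, here’s a deliberately toy stand-in: instead of a fine-tuned BERT model it uses simple word-frequency ratios learned from labeled samples. It illustrates only the supervised-learning loop and is nothing like the production model:

```python
import math
from collections import Counter

def train_marker_model(samples):
    """samples: list of (text, label) pairs, label "ai" or "human".
    Returns a per-word AI/human frequency ratio -- a toy stand-in for
    the fine-tuned language model described above."""
    counts = {"ai": Counter(), "human": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    vocab = set(counts["ai"]) | set(counts["human"])
    total_ai = sum(counts["ai"].values()) + len(vocab)        # Laplace smoothing
    total_human = sum(counts["human"].values()) + len(vocab)
    return {
        word: ((counts["ai"][word] + 1) / total_ai)
              / ((counts["human"][word] + 1) / total_human)
        for word in vocab
    }

def ai_likelihood(model, text):
    """Squash the summed log-ratios into a rough 0-1 'AI likelihood'."""
    log_odds = sum(math.log(model.get(w, 1.0)) for w in text.lower().split())
    return 1 / (1 + math.exp(-log_odds))
```

The ‘testing & evaluation’ step then amounts to scoring a held-out labeled dataset and keeping the new model only if its accuracy improves.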

It’s evident that they follow the principle of continuous improvement, which is why they keep building and testing new models. At the time of writing this post, Originality AI has already released three models.

What types of AI content can Originality AI detect?

Here’s a quick rundown of Originality AI’s AI content detection capabilities and limitations:

  • Model compatibility: Only detects GPT content.
  • 100% AI: Detects AI-generated content from scratch.
  • AI paraphrasing: Can easily detect AI-written content paraphrased using paraphrasing tools like Quillbot.
  • Human + AI detection: Handles hybrid or human-edited content.
  • Language limitation: Only detects content written in English, a noticeable limitation.

How accurate is Originality AI?

Based on the results of the massive testing guide recently posted on their website, I can segment Originality AI’s accuracy into 4 categories:

  • Sharp: Flags duplicated content reliably
  • Smart: Catches cleverly paraphrased content
  • Sensitive: Detects AI-written content
  • Secure: Gives original human-written content a clean chit

Not perfect, but it’s fair to say Originality AI gets an “A” for accuracy. It spots duplicated and AI-generated content, letting your authentic, human-written work shine.

But here’s a word of caution. The stats above are intriguing, sure, but don’t let them paint the whole picture. As with any emerging technology, there’s a lack of extensive research on the efficacy of AI content detection, so it’s a story that’s still developing. Numbers give us a quick impression but can sometimes be misleading. I’d suggest adopting a practical approach: use Originality AI as a tool to aid your content quality assurance efforts, not as the be-all and end-all.

Originality.AI Features

Here’s a brief overview of each feature offered by Originality.AI:

  • AI scan: This tool quickly scans content, giving scores for originality. Using advanced algorithms, it helps distinguish between AI and human-written text.
  • AI content detection scores: Picture this as a weather report, where the originality score reflects the likelihood of a text being AI-generated. For instance, a 70% AI score suggests that there’s a 70% ‘probability’ of the content being AI-produced.
  • Plagiarism scan: Besides the AI Scan, there’s a plagiarism checker to make sure your content isn’t accidentally using someone else’s words.
  • Readability scan: To make sure your content is easy to read, Originality AI offers a readability scan. It uses the Flesch-Kincaid Reading Ease formula, assessing text by examining average sentence length and syllables per word. This helps gauge how reader-friendly your content truly is.
  • Full site scans and scan from URL: If you need a quick check of an actual website version, these features allow you to scan an entire website or a specific URL. This is handy for larger content audits.
  • Specialty: With its specific focus on content writers, Originality AI excels at detecting web content, even paraphrased content, making it an indispensable tool in any online publisher’s toolkit.
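The Flesch Reading Ease score behind the readability scan is a published formula, so it’s easy to sketch. The syllable counter below is a naive vowel-group heuristic of my own, not Originality AI’s actual implementation:

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word);
    # higher scores mean easier reading.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Short sentences of one-syllable words score near the top of the scale, which is why punchy blog prose rates as very readable.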

Originality.AI Chrome Extension

The Originality.AI Chrome Extension enhances the platform’s functionality.

If you’re a Google Docs user, with just one click, you can run an AI scan right from your GDocs dashboard. And once the scan is complete, you can view the overview immediately.

You can view the ‘Originality Report’ for an in-depth AI score breakdown. It shows AI influence in your content using color-coded highlights for each sentence.

Also, the browser extension includes a tool named ‘Watch a Writer Write.’ It scores AI likeness in real time as you write, giving screen-recorded feedback on your manual inputs.

Test cases and results

In assessing Originality.AI, I’ve conducted an exhaustive testing regimen to measure its mettle.

But that’s not all. The true litmus test lies in pitting Originality.AI against other top-tier products in the domain. More on that soon.

For the aficionados out there, I’ve detailed my testing content, giving you a transparent view of the entire process. Let’s dive right in.

Test 1: Using 100% AI-written content

For this test, I just prompted ChatGPT (GPT-4) to produce a short write-up on “How To Manage Your Finances in 20s.” I also added an additional command to make the output sound conversational and engaging to add some personality to the piece. This prevents the output from sounding robotic.

I ran the output on Originality.AI to view the results. Here’s what I saw:

Testing AI-written content with Originality.AI

As expected, Originality.AI, using its latest model (2.0), flagged this piece as 100% likely to be AI-generated.

Click here to access the full report of this scan.

Test 2: Random content from our previous blog post at BloggingX

For this test, I selected one of our older posts titled “Top 7 Ways to Make Money Travel Blogging.” This post was published on March 13, 2019, when AI writing tools weren’t a thing.

I randomly picked a section and ran it on Originality AI. Here’s what I saw:

Testing random content from a blog post at BloggingX using Originality.AI

Surprisingly enough, Originality.AI was spot on in labeling this piece of content as 100% likely to be human-written.

Click here to access the full report of this scan.

Test 3: AI content + Quillbot paraphrasing

As the title suggests, I picked the same sample text I used for Test #1 and paraphrased it using Quillbot.

Upon running the output through Originality.AI, here’s what I got:

Testing AI content and Quillbot paraphrasing

Damn, here we go again! Originality.AI correctly classified this text as 100% likely to be AI-generated. And why not? After all, Quillbot also uses AI to spin text into multiple variations, aka paraphrasing.

Test 4: AI content + some clever prompt engineering

For this test, here’s what I did:

  • Picked the same 100% AI-generated text that I used for Test #1 and did some prompt magic on it to humanize it.
  • Ran the content on Originality.AI

Here’s the result:

Testing AI content and some clever prompt engineering

This is where things get interesting. This time, Originality.AI flagged this 100% AI-generated text as 40% likely to be human-written.

The reason why this is an interesting finding is simple: I did not edit this text even one bit. All I did was run a new command to alter the output in a way that it resembles a human. That’s it!

From the results, it seems the text successfully tricked Originality.AI into believing it had some human touch. Interesting, right?

Click here to view the full report of this scan.

Test 5: Plagiarism

To perform this test, I picked up a section from the same blog post we used in Test #2 and added a few pieces of unique text.

I aimed to test how accurately Originality.AI detects plagiarism by checking the following:

  • If it can spot 100% copied content correctly.
  • How well it identifies and separates unique text in results.

Upon running the plagiarism scan, here’s what I got:

Testing plagiarism with Originality.AI

This was shocking, to say the least. The sample I used for this test had a mix of unique and duplicate content, and Originality.AI identified it as 100% unique!

To be sure, I gave this another try, and this time, I used one of the recently published articles from Forbes to mitigate the indexing uncertainty. Plus, I copied the entire post and ran it through Originality.AI’s plagiarism scanner.

Here are the results:

Plagiarized content identified with Originality.AI

Okay, this time, Originality.AI’s plagiarism scan results were on point.

Test results summary

Diving straight into the results of my hands-on testing, it’s clear that Originality.AI has proven its mettle in several key areas:

  • Test 1 (100% AI-written content): Originality.AI accurately identified the text as AI-generated. This assures that the tool can discern between human and AI-produced content (GPT only).
  • Test 2 (Human-written content): Again, Originality.AI came out strong, correctly classifying the content as 100% human-generated. A promising sign of its reliability.
  • Test 3 (AI content + Quillbot paraphrasing): Even with paraphrased AI content, Originality.AI was able to spot the machine’s touch. This is invaluable for maintaining content originality and authenticity.

But, it wasn’t all smooth sailing:

  • Test 4 (AI content + clever prompt engineering): The tool marked the AI-generated content as 40% likely human-written, indicating a possible area of refinement – distinguishing sophisticated AI outputs engineered to sound human-like.

As for plagiarism detection:

  • Test 5 (Plagiarism): A mixed performance. While Originality.AI initially missed a mix of unique and duplicate content, it later correctly spotted a copied Forbes article. This indicates a certain proficiency in plagiarism detection but also points toward a need for more rigorous, varied testing.

Originality.AI detection engines

In my experience using Originality.AI, I found a clear progression in the detection abilities of its different detection engines. Each successive model appeared to build on the strengths of its predecessor while mitigating its weaknesses.

  • The 1.1 model was relatively easy to trick, particularly through paraphrasing and newer AI models like GPT-4/ChatGPT. It still had its strong points, but some clever engineering could find its blind spots.
  • 1.4 model introduced a significant leap in detection capabilities. It was much harder to deceive, standing its ground against complex AI-generated content.
  • 2.0 model, the latest, brought even more robustness to the table. Not only was it harder to trick, but it also noticeably reduced the rate of false positives.

This gradual improvement in Originality.AI’s detection abilities is no coincidence. It’s a testament to their ‘Fine-Tuning AI Model Approach.’ They are committed to continuous improvement, routinely training their detection models on newer datasets to counter the latest advancements in AI writing technology.

This means that as LLMs become more sophisticated, so too does Originality.AI’s ability to discern AI-generated content. At least in theory, that seems to be the case.

Originality.AI pricing

Originality.AI’s pricing model has seen some changes over time. Initially, they operated on a Pay-As-You-Go model, charging users per credit used.

However, recently, they introduced a new pricing structure: a monthly subscription model. For $14.95/mo, you get 3000 credits and full access to the platform’s features.

Early subscribers have not been left out of these changes. They’ve been shifted to what Originality.AI refers to as a ‘grandfathered plan,’ meaning their billing mode remains unchanged. Additionally, they will continue to have access to the platform’s latest features without paying extra.

FAQs on Originality.AI

What is a good score on Originality AI?

Originality AI uses color-coded scoring to indicate the likelihood of content being human-generated. Anything over 70%, indicated in green, is considered a good score, implying that the content likely originated from a human writer.

Can I use Originality AI for free?

Originality AI is not a free service beyond the 50 free credits bundled with its Chrome extension. Paid access now costs $14.95 per month for 3,000 credits, which works out to roughly $0.01 per credit.

How much does Originality AI cost?

Originality AI has recently shifted from a Pay-As-You-Go model to a monthly subscription plan. Users can now access all features for a flat fee of $14.95 per month.

How accurate is Originality AI?

Compared to other AI content detectors, Originality AI demonstrates high accuracy. However, it tends to lean towards classifying text as AI-generated, especially when the context is complex or ambiguous.


Overall, I think Originality AI is a robust tool designed for bloggers and content managers looking to add an extra layer of quality assurance to their content. Some key considerations:

  • Ideal for bloggers and publishers. It adds an additional layer of assurance for content originality.
  • Don’t take it as an authority. An average score approach is recommended. For example, consider a threshold of 70-80% human-like content across multiple writer submissions rather than aiming for a strict 99%.
  • Not recommended for educators. It struggles with classifying academic text accurately.
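The ‘average score approach’ recommended above might look like this in practice; the 75% threshold is purely illustrative:

```python
def writer_passes(human_scores, threshold=75.0):
    """Average the human-likelihood scores across a writer's submissions
    and compare against a tolerance band instead of a strict 99% bar."""
    if not human_scores:
        raise ValueError("no submissions to evaluate")
    return sum(human_scores) / len(human_scores) >= threshold
```

With this approach, a writer with one flagged piece among mostly human-scored submissions (say, scores of 92, 58, 88, and 81) still passes, since the average sits near 80.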

In essence, while not perfect, Originality AI is a worthwhile addition to your content management toolkit.