Paste your text or upload a PDF/DOCX. Our detector analyzes every sentence, shows you exactly which parts look AI-generated, and gives you an overall verdict.
Paste your text below (minimum 180 characters) or upload a PDF or DOCX file to analyze.
This is an assistive detection tool, not evidence of plagiarism or AI use. False positives are possible — especially on creative writing, heavily edited text, and non-native English. Use results as one input among many, not as final judgement.
Paste your text into the box below or upload a PDF or DOCX file. Documents under 1 MB are accepted. Minimum length is 180 characters so the detector has enough context to produce a reliable verdict.
When the analysis finishes, you see three views: a document-level verdict card, a pie-chart distribution of AI-flagged sentences, and a sentence-by-sentence diff that highlights exactly which parts look AI-generated. You can save the whole report as a PDF using the “Save as PDF” button in the results header.
Our detector is powered by ModernBERT — a fine-tuned 149-million-parameter transformer that reads your text and predicts, sentence by sentence, whether it was generated by a large language model like GPT, Claude, Gemini, Llama, DeepSeek, or Mistral.
The model was trained on a balanced corpus of 1,200 samples — 600 written by humans (PAN25 + PERSUADE essays) and 600 generated by modern LLMs. Training used focal loss with hard-negative mining to handle ambiguous text, and the decision threshold was calibrated on a held-out validation split.
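Focal loss down-weights easy, confidently classified examples so training focuses on hard, ambiguous ones. Here is a minimal sketch of the binary form; the α and γ values are common illustrative defaults, not the detector's actual hyperparameters:

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss for a single example.

    p: predicted probability of the positive (AI) class
    y: true label, 1 = AI-generated, 0 = human
    alpha and gamma are illustrative defaults, not the tool's real settings.
    """
    p_t = p if y == 1 else 1.0 - p             # probability assigned to the true class
    weight = alpha if y == 1 else 1.0 - alpha  # class-balancing factor
    return -weight * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident, correct prediction contributes almost nothing to the loss,
# while a borderline example keeps most of its gradient signal:
easy = focal_loss(0.95, 1)   # well-classified AI sentence: near-zero loss
hard = focal_loss(0.55, 1)   # ambiguous AI sentence: much larger loss
```

The (1 − p_t)^γ factor is what shrinks the loss on easy examples; hard-negative mining complements it by oversampling the human-written samples the model most often confuses with AI output.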
For each sentence we compute an AI probability between 0% and 100%. Sentences at 50% or above are marked red (likely AI), 25–49% amber (possibly AI), and below 25% green (likely human). The document verdict is the mean score across all analysis windows.
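The banding rule can be sketched in a few lines of Python; the function names and the handling of the exact 25% and 50% boundaries are our own illustrative choices:

```python
def sentence_band(p_ai: float) -> str:
    """Map a sentence's AI probability (0.0-1.0) to a colour band.

    Treating exactly 0.50 as red and exactly 0.25 as amber is an
    assumption; the product describes the bands only as ranges.
    """
    if p_ai >= 0.50:
        return "red"    # likely AI
    if p_ai >= 0.25:
        return "amber"  # possibly AI
    return "green"      # likely human

def document_verdict(window_scores: list[float]) -> float:
    """Document-level score: the mean AI probability across analysis windows."""
    return sum(window_scores) / len(window_scores)
```

So a document whose windows score 0.2, 0.4, and 0.6 would get a verdict of 0.4, even though one window individually lands in the red band.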
Most AI detectors give you one opaque number. We give you the full picture: a sentence-level heatmap, a distribution pie chart, and a diff-style view showing exactly which sentences look AI-generated and why.
Our published accuracy is AUC-ROC 0.9884 on a 1,000-sample validation set covering 22 generators including GPT-4, GPT-4o, GPT-5, Claude 4, Gemini 2, Llama 3, DeepSeek, Qwen, and Mistral. At a 50% threshold: zero false positives on the test set, 60% recall.
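AUC-ROC has a simple reading: it is the probability that a randomly chosen AI-written sample receives a higher score than a randomly chosen human-written one. A self-contained illustration of that rank-based definition, with made-up scores:

```python
def auc_roc(human_scores: list[float], ai_scores: list[float]) -> float:
    """Rank-based AUC-ROC: the fraction of (AI, human) pairs in which the
    AI sample scores higher. Ties count as half a win."""
    wins = 0.0
    for a in ai_scores:
        for h in human_scores:
            if a > h:
                wins += 1.0
            elif a == h:
                wins += 0.5
    return wins / (len(ai_scores) * len(human_scores))

# Perfect separation: every AI sample outranks every human sample.
perfect = auc_roc([0.05, 0.10, 0.20], [0.70, 0.85, 0.95])   # 1.0
```

An AUC of 0.9884 therefore means the detector ranks an AI sample above a human sample in roughly 99 out of 100 random pairings, independent of where the 50% decision threshold is set.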
No third-party API dependency. No training on your submissions. Every check runs on GPU infrastructure we own and operate — no SaaS passthrough margin, no ToS risk, no vendor lock-in.
Download a free demo or purchase a license to start checking for plagiarism and AI-generated content.
Our training corpus includes samples from the major frontier models across vendors, so the detector generalises across the current LLM landscape.
OpenAI: GPT-3.5, GPT-4, GPT-4 Turbo, GPT-4o, GPT-5.0, GPT-5.3, GPT-5.4. Anthropic: Claude 3 Opus, Claude 3.5 Sonnet, Claude 4 Opus, Claude 4.5 Sonnet. Google: Gemini 1.5 Pro, Gemini 2.0, Gemini 2.5. Meta: Llama 3.1, Llama 3.3. Others: Qwen 2.5, Qwen 3, DeepSeek R1, Mistral Large, o3-mini, and more.
AI detection is probabilistic, not evidentiary. Treat a high AI score as a signal to investigate further, not as proof. False positives remain possible, particularly on creative writing, heavily edited text, and non-native English.
Shorter text (under 100 words) is harder to classify reliably. Paraphrased or heavily edited AI text can evade detection. Essays that went through a humaniser tool may score as human even when they were originally generated.
Creative writing and literary analysis are the weakest domain for any AI detector — human style in fiction can converge with LLM outputs. If your use case is academic integrity, pair this tool with source-matching plagiarism detection (see our desktop product below).