About Text Workbench
Version 0.4.0
What this is
Text Workbench is a set of writing analysis tools built on classical natural language processing — no AI, no neural networks, no black boxes. Every score you see is the direct result of measurable, explainable properties of your text.
The free tools are designed for writers, editors, students, and content creators who want honest feedback on their writing — not automated rewrites. We show you what the text is doing and let you decide what to do about it.
The philosophy
Most "AI detector" tools are themselves AI models — probabilistic classifiers trained to guess whether text came from a language model. They work to an extent, but they also produce false positives, struggle with newer models, and can't tell you why they flagged something.
We take a different approach. Instead of predicting AI probability, we measure specific linguistic properties that tend to differ between human and AI-generated writing. The result isn't a detection score — it's a risk assessment: a way of saying "this text has patterns that are common in AI-generated writing, and here's which patterns."
That distinction matters. A high score doesn't mean your text was written by AI. It means your text shares structural and linguistic properties with AI output — properties that, if addressed, will generally make the writing stronger regardless of its origin.
What we measure
Our tools analyze writing across several independent dimensions. Each tool focuses on one; the AI Risk Checker combines them into a composite score.
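As a rough illustration of how independent dimension scores might be combined, here is a minimal weighted-average sketch. The dimension names and weights below are invented for illustration; the actual composite weighting is not described on this page.

```python
# Hypothetical weights for illustration only; the real composite
# weighting used by the AI Risk Checker is not public.
WEIGHTS = {
    "structure": 0.25,
    "specificity": 0.25,
    "phrases": 0.20,
    "voice": 0.20,
    "readability": 0.10,
}

def composite_risk(scores: dict) -> float:
    """Combine per-dimension risk scores (each 0.0-1.0) into one 0.0-1.0 composite."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)
```

Because the weights sum to 1.0, a text scoring 1.0 on every dimension yields a composite of exactly 1.0, and missing dimensions simply contribute nothing.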
Structural variety
Human writers naturally vary their sentence rhythm — short punchy sentences after long complex ones, abrupt paragraph breaks, mid-thought asides. AI-generated text tends toward a more metronomic cadence. We measure how much your text deviates from structural uniformity.
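One simple way to quantify cadence, sketched here under the assumption of a naive punctuation-based sentence split (a real tool would use a proper sentence tokenizer), is the coefficient of variation of sentence lengths: uniform, metronomic text scores near zero, varied text scores higher.

```python
import re
import statistics

def sentence_length_variation(text: str) -> float:
    """Coefficient of variation (stdev / mean) of sentence lengths in words.
    Lower values suggest a more uniform, metronomic cadence."""
    # Naive split on terminal punctuation; adequate for illustration only.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

For example, "The cat sat down. The dog ran off." scores 0.0 (identical lengths), while mixing a one-word sentence with a long one pushes the score well above zero.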
Language specificity
Good writing is concrete. Vague quantifiers, abstract nouns, and weak action verbs are common in AI output because language models tend to generalize. We flag the specific words and constructions that signal abstraction over specificity.
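Flagging abstraction can be as simple as matching tokens against a word list. The list below is a tiny invented sample, not the actual lexicon these tools use.

```python
import re

# Tiny illustrative sample; the real lexicon would be far larger
# and would also cover multi-word constructions.
VAGUE_TERMS = {"several", "various", "numerous", "many",
               "significant", "things", "stuff", "aspects"}

def flag_vague(text: str) -> list:
    """Return the vague terms found in the text, deduplicated and sorted."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sorted(set(t for t in tokens if t in VAGUE_TERMS))
```

Running it on "We made several significant changes to various things." flags four terms, each pointing to a spot where a concrete number, noun, or verb would strengthen the sentence.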
Phrase originality
Certain phrases are so overused — in business writing, in AI output, in both — that they've stopped carrying meaning. We maintain a curated list of these constructions and flag them when they appear, along with suggestions for what you might say instead.
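A curated phrase list with suggested replacements can be represented as a simple mapping. The three entries below are hypothetical examples chosen for illustration, not the product's actual list.

```python
# Hypothetical sample entries; a real curated list would be much larger.
CLICHES = {
    "at the end of the day": "ultimately",
    "in today's fast-paced world": "(delete, or name the specific change)",
    "delve into": "examine",
}

def flag_cliches(text: str) -> list:
    """Return (phrase, suggested alternative) pairs for each cliche found."""
    lowered = text.lower()
    return [(phrase, alt) for phrase, alt in CLICHES.items() if phrase in lowered]
```

A substring scan like this is deliberately simple; it surfaces the construction and a candidate replacement, leaving the rewrite decision to the writer.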
Voice and agency
Passive constructions and hedging language obscure who is doing what. AI models, trained on enormous amounts of formal writing, tend toward passive voice and qualified statements. We measure both the density of passive constructions and the presence of hedging patterns.
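Passive density can be approximated with a crude pattern: a form of "to be" followed by a word ending in -ed or -en. This sketch is an assumption about how such a measure might work, and the pattern knowingly produces false positives (e.g. "was tired"); a production tool would use part-of-speech tagging.

```python
import re

# Naive passive-voice pattern: be-verb + word ending in -ed/-en.
# Knowingly imprecise; shown only to illustrate the idea of a density metric.
PASSIVE_RE = re.compile(
    r"\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b",
    re.IGNORECASE,
)

def passive_density(text: str) -> float:
    """Fraction of sentences containing a (naively detected) passive construction."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    hits = sum(1 for s in sentences if PASSIVE_RE.search(s))
    return hits / len(sentences)
```

"The report was written by the team. The team wrote the report." scores 0.5: one passive sentence out of two.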
Readability
Using established readability formulas, we score text for its expected reading difficulty and flag sentences that are likely to cause readers to slow down or re-read.
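The best-known of these formulas is Flesch Reading Ease, which scores text from word, sentence, and syllable counts; higher means easier. The syllable counter below is a rough vowel-group heuristic, and this sketch is an illustration of the formula, not the product's implementation.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups; every word has at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, monosyllabic sentences score far higher than long, polysyllabic ones, which is exactly the signal used to flag passages likely to slow readers down.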
What we don't claim
- We do not claim to definitively identify AI-generated text. No heuristic system can.
- A low score does not certify that text is human-written. A high score does not prove it was generated by AI.
- Our phrase lists and heuristics reflect patterns observed in current AI models. As models change, patterns change.
- Modern AI models (2024 and later) have been trained to write more naturally. They vary sentence length, avoid clichés, and mimic personal voice — which means well-crafted AI output will score lower than older, more formulaic AI text. A Moderate score on recent AI writing is expected, not a failure of the tool.
- These tools are aids for revision, not verdicts.
The paid tier
The free tools show you the problems. Text Workbench Pro uses AI to help you fix them — automatically rewriting flagged sentences in your voice, not replacing your writing with generic output. The goal is a document that reads as distinctly yours.
Learn about Pro →