Working on a personal checklist for reading AI-generated text by eye, without running anything through a tool. Partly because detection tools have the false-positive problems I’ve mentioned in other threads, and partly because I want to understand the underlying signals rather than outsourcing the judgment.
What I have so far, based on reading a lot of AI output:
- Transitions that are too smooth. Human writing has more friction, more abrupt shifts. AI tends to connect everything with confident bridge phrases.
- Closing sentences that summarize and “resolve” the paragraph. Human writers often end on something unresolved: a question, a contradiction, something the reader has to sit with.
- Balanced “on the other hand” structures that present multiple perspectives without actually committing to any of them.
- Vague attribution. “Studies show,” “experts agree,” “research suggests,” with no specific source named.
- The word “delve.” I don’t know a single human writer who uses this word regularly.
What am I missing? I’m specifically interested in signals that hold up across content types, not just student essays: professional copywriting, journalism, long-form editorial work.