There’s a new collective paranoia among people who write for the internet: the fear that their text “sounds like ChatGPT.”
Writers purging words like “delve” from their vocabulary. Brands requesting that all em dashes be removed from their content. Professors convinced that a 2019 academic paper was written by AI. People using backslashes and odd punctuation “to look more human.”
Welcome to the anti-AI writing movement.
But before you throw away your dictionary and start inserting intentional errors to “sound human,” let’s examine whether this paranoia has merit—or whether we’re falling into something worse than the original problem.
Why people are writing this way now
The root is simple: nobody wants to be mistaken for a machine.
And the logic makes sense. A 2025 Max Planck Institute study showed that usage of words like delve, robust, and pivotal has increased by over 50% in published articles since ChatGPT’s launch. The signals exist. The patterns are real.
From there, an entire industry of vocabulary blacklists was born:
- Banned verbs: delve, leverage, optimize, endeavour
- Canceled transitions: furthermore, moreover, additionally, in conclusion
- Vetoed phrases: “unlock the potential of,” “harness the power of,” “in today’s ever-evolving landscape”
Tom Orbach published an “anti-AI cheat sheet” that went viral. Blake Stockton created a 101-part series on patterns to avoid. There are entire newsletters dedicated to cataloging “the 500 most common ChatGPT words.”
The message is clear: if you use these words, you’re suspicious.
Meanwhile, the “tactile rebellion” has spread beyond writing. Designers in 2026 are intentionally scribbling, scanning, and distorting typography through photocopying techniques—all to signal “a human made this.” The fear of looking machine-made has become a cross-industry phenomenon.
Does it actually make a difference?
Here’s where things get interesting.
John R. Gallagher, an English professor at the University of Illinois, described what he calls “the mental tyranny of AI writing”: a hypersensitivity where you start seeing AI everywhere. He himself was convinced an academic article had been written by ChatGPT… until he verified it was published in 2019.
The problem isn’t the words. It’s the paranoia.
Let’s think about what actually distinguishes AI-generated text from human writing:
- It’s not individual words. “Furthermore” isn’t an AI signal. It’s a transition that’s existed for centuries.
- It’s the combination. Perfect grammar + zero personality + generic insights + uniform sentence lengths = that’s what gives it away.
- It’s the absence, not the presence. What’s missing—personal experience, specificity, real opinion, natural imperfections—is what reveals the machine.
As Slate argued in “ChatGPT Shaming Is Making Our Writing So Much Worse”: an inherent suspicion of good writing is probably anathema to producing good writing.
Brands are requesting that em dashes be removed from all their content. Em dashes. A punctuation mark that’s existed since the 18th century. Because “it looks like AI.”
That’s not strategy. That’s superstition.
Do people actually care?
The short answer: yes, but less than you think. And it depends enormously on context.
Recent research shows contradictory data:
What the studies say:
- Readers rate AI-generated content as similar quality to human writing when they don’t know its origin
- But when told it was AI-generated, they lower their perception of credibility and authenticity
- In certain contexts (technical content, educational guides), tolerance for AI is much higher
- In personal, literary, or opinion-based content, resistance remains high
What this means in practice:
- If nobody knows you used AI, they probably won’t notice the difference
- If they suspect you used AI, they’ll judge you more harshly than you deserve
- The type of content matters more than the words you use
A data point that should give you pause: readers are becoming less able to distinguish AI text from human text without being explicitly told. People who say “I can always tell” are probably fooling themselves.
TIME magazine recently noted that “looks like AI” has become the internet’s new favorite insult—a catch-all accusation that says more about the accuser’s anxiety than the quality of the work.
Is it really that bad to “sound like AI”?
Let’s ask an uncomfortable question: what does “sounding like AI” actually mean?
It means your writing is:
- Clear and well-structured
- Grammatically correct
- Logically connected, with clear transitions
- Professional and polished
Since when is that a flaw?
The real problem was never clarity or correctness. The problem is when a text is all of that and nothing more. When there’s no mind behind it. When you could swap out the company name and the article would work for anyone.
But that’s not fixed by removing “furthermore” from your vocabulary. It’s fixed by having something to say.
How they’re doing it (and why most fail)
The anti-AI writing movement has split into two camps:
The form-focused camp
- Eliminating words from blacklists
- Using “quirky” punctuation (backslashes, random capitalization)
- Inserting intentional grammatical errors
- Avoiding lists and organized structures
- Writing in choppy sentences. Like this. To. Sound. Human.
This is the equivalent of putting on a fake mustache so people don’t recognize you. Superficial, easy to spot, and frankly, a bit ridiculous.
As The Sociological Review put it in their piece “War of the Words”: writers are now “imagining readers’ future assessments of what they are about to type,” and the fear of being considered a bot is shaping how they approach their creative lives. That’s letting the algorithm into the room before you’ve even started writing.
The substance-focused camp
- Including specific personal experiences
- Offering real opinions (even controversial ones)
- Writing with the natural imperfections of human thought
- Adding context that only someone with real experience would have
- Having a point of view that isn’t “what anyone would say”
Guess which one works.
What we can actually learn from this
The anti-AI phenomenon tells us something important about writing in 2026, but probably not what you think.
1. Homogenization existed before AI
Let’s be honest: generic marketing content sounded the same before ChatGPT. The same buzzwords, the same structures, the same recycled “insights.” AI only amplified a problem that was already endemic.
If your writing “sounds like AI,” maybe the problem isn’t that AI copied you. It’s that you were both drawing from the same well of clichés.
2. “Anti-AI” writing can be equally formulaic
There’s a delicious irony in the anti-AI movement: it has become as predictable as what it criticizes.
3 Quarks Daily identified a concept called “algorithmic interpellation”: acts of resistance against AI end up being defined by the very algorithmic structures they oppose. What AI looks like becomes what must be avoided, so the form AI takes ends up defining the resistance to it as well.
In other words: if everyone follows the same “anti-AI cheat sheet,” they end up sounding alike. Just in a different way.
Some designers have taken this irony to its logical extreme—first generating content with AI, then deliberately degrading it through analog processes to achieve a “human” aesthetic. The machine is being programmed to produce the very defects it was initially meant to eliminate.
3. What you actually want is differentiation, not anti-AI
The fear of sounding like AI is really the fear of sounding like everyone else. And that fear is legitimate.
A study on the “Artificial Hivemind” found that when 25 different language models generated metaphors about time, only two dominant clusters emerged: “time is a river” and “time is a weaver.” Despite the variety of models.
That should concern you. Not because you use ChatGPT, but because creative convergence is real—with or without AI. As more people use AI tools, the tendency for all content to sound similar doesn’t just stifle individuality. It could flatten the entire landscape of voices into boring uniformity.
So, should you or shouldn’t you write “anti-AI”?
My position, and I understand many won’t share it:
Don’t focus on not sounding like AI. Focus on sounding like you.
The difference is enormous:
| Anti-AI approach | Own-voice approach |
|---|---|
| Eliminates words from a list | Develops a personal vocabulary |
| Avoids “suspicious” structures | Uses whatever structure serves the message |
| Inserts artificial imperfections | Accepts natural imperfections |
| Reacts to fear | Builds from intention |
| Defined by what it’s not | Defined by what it is |
The rules worth following aren’t “don’t use delve” but:
- Say something specific. “Our conversion rate jumped 23% when we stopped using generic landing pages” beats “landing pages are important for conversion” every time.
- Have a real opinion. If you can replace your brand name with any other and the text still works, you don’t have a voice.
- Include what only you know. Concrete experience is the only real antidote to genericness—human or artificial.
- Don’t self-censor. If “furthermore” is the best transition for your idea, use it. Don’t let a banned-words list impoverish your writing.
The uncomfortable conclusion
The real threat isn’t sounding like AI. It’s that the fear of sounding like AI makes you write worse.
Every time you remove a perfectly valid word from your vocabulary because “it sounds like ChatGPT,” you’re letting a machine define the boundaries of your expression. You’re ceding linguistic territory to an algorithm.
And that, ironically, is letting AI win.
Write well. Write with substance. Write with your voice. And if someone thinks that sounds like AI, that’s their problem—not yours.
Does your content need its own voice instead of a banned-words checklist? At Mazkara Studio, we design content strategies that sound like your brand, not like a prompt. Let’s talk.