I Got Banned for Writing My Own Story
Last week I posted a personal reflection to a subreddit I had been part of for years. Within minutes a moderator flagged it as “AI slop”, ran an AI detector on it, treated the resulting score as proof, banned me, and moved on. I had written the post myself.
I had used AI to help shape the draft. I wrote the rough version, then asked an AI to tighten the structure and challenge weak arguments. Think of it as asking a colleague to read your work and say, “this paragraph buries the point.” The detector returned a positive result. The moderator did not read the post.
That experience made me think.
Detection answers the wrong question
AI detectors measure whether text patterns match model output. They do not measure whether you thought through the piece, struggled with the structure, decided what to keep and what to cut.
My process: I write a rough draft. I iterate with AI. Some ideas do not survive, which is the point. A bot farm generating thousands of posts skips that whole process. The detector returns the same score for both. A moderator who trusts the score instead of reading the post chooses speed over accuracy.
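The failure mode is easy to sketch. A moderation rule that acts on a detector score alone cannot tell the two processes apart; all it sees is the number. The scores and the threshold below are invented for illustration, not taken from any real detector:

```python
# Hypothetical detector scores -- both numbers are invented for illustration.
iterated_human_draft = 0.87   # human draft, refined through AI iteration
bot_farm_post = 0.87          # fully generated post, no human thinking

THRESHOLD = 0.8  # an arbitrary cutoff a moderator plugin might apply

def verdict(score: float) -> str:
    """Score-only moderation: no reading, no context, just a cutoff."""
    return "ban" if score >= THRESHOLD else "keep"

# The rule returns the same verdict for both, because the score is all it sees.
print(verdict(iterated_human_draft))  # -> ban
print(verdict(bot_farm_post))         # -> ban
```

Whatever signal the detector measures, the decision collapses everything upstream of the text into a single number, and the process that produced the text is exactly what the number discards.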
The subreddit guidelines said nothing about AI-assisted writing. The moderator enforced a rule that did not exist, and that part bothered me more than the ban.
Platform incentives
Reddit and Stack Overflow sit on enormous archives of human-generated text. AI companies pay for that data now. I started wondering whether the platforms have a financial reason to over-enforce. Filtering AI content protects the commercial value of their corpus alongside the quality of discussion.
Over-enforcement catches real writers, and you cannot see that cost on a balance sheet. A cleaner corpus to license is concrete and sellable. I do not know if anyone at these companies has made this calculation on purpose. I might be wrong about this. But the financial incentives and the enforcement patterns point in the same direction, and that makes me uneasy.
A song that landed differently
I was driving home after picking up my son from kindergarten, still thinking about all of this, when “A Real Hero” by College and Electric Youth came on the radio. If you have seen Drive (2011), you know the track.
There is a passage about going against the grain of dystopic claims, about being judged by actions rather than thoughts. It hit differently that afternoon. An algorithm had judged me for something I wrote. A pattern-matching system had produced a score about me, and I had no way to respond to it.
A subreddit ban is nothing in the big picture. But it stings when you are telling your own human story. You hit that wall and it makes you wonder.
The timestamp problem
Generated content has spread across the internet since 2023. Researchers working with language data may need to split their corpus at that boundary. Peer-reviewed journals have editorial review. Major outlets have bylines and editorial boards. Reddit has volunteer moderators with detector plugins. I have not seen anyone audit the false positive rate.
I would like to see hard numbers on how generated content has spread across different domains.
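The corpus split is simple to express. A minimal sketch of what filtering at that boundary might look like, where the cutoff date, the document fields, and the sample records are all assumptions for illustration:

```python
from datetime import date

# Hypothetical documents with publication dates; the field names are assumed.
docs = [
    {"text": "forum post from 2019", "published": date(2019, 5, 1)},
    {"text": "blog post from 2021", "published": date(2021, 8, 12)},
    {"text": "article from 2024", "published": date(2024, 3, 3)},
]

# A rough boundary: before this date, large-scale generated text was rare.
CUTOFF = date(2023, 1, 1)

pre_cutoff = [d for d in docs if d["published"] < CUTOFF]
post_cutoff = [d for d in docs if d["published"] >= CUTOFF]

print(len(pre_cutoff), len(post_cutoff))  # -> 2 1
```

The filtering itself is trivial; the hard part is trusting the timestamps, since scraped text is often republished or backdated.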
Rethinking what “mine” means
I have been writing for a long time. I have a master’s thesis, a PhD, and published papers, all from before this technology existed. I used to think that track record spoke for itself.
It does not. A detector reads statistical patterns. It cannot read intent or effort. My process of iterating with AI produces text that, to a detector, looks the same as text from someone who typed “write me a blog post about X” and hit enter.
The ban forced me to think about what I mean when I call something “mine.” My answer, for now: the thinking has to come first. I write a rough draft that captures what I think. I use AI to tighten it. I do the deciding. The phrasing got help.
I am less certain about where to draw that line than I was a week ago. The ban made me examine it more carefully. That is the one useful thing that came out of getting kicked from a subreddit.
Writing this down
I am documenting these opinions because I expect them to change. People form norms around AI-assisted writing in real time. What feels right today may look naive in a year. I would rather write this down and revisit it than wait for a consensus that may never settle.
Written by a human, refined with AI.