Despite what assorted charlatans (especially ones selling tools) would have you believe, it's not actually possible to 'detect' whether content was written by an LLM or by a human. What such tools actually detect is "to what degree does this particular content have markers that are common in LLM-generated content?"
Usually this has a lot to do with the content's "burstiness" (how much sentence length and structure vary) and "perplexity" (how surprising the word choices are to a language model); the more of each, the more "human." So if you want to get into the business of tricking LLM detectors, you simply make content more bursty/perplexing to make it more "human" and less so to make it less "human," however you originally generated it.
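For the curious, here's a rough sketch of what those two measures can look like in code. The burstiness function below is just one common proxy (variation in sentence length), and the perplexity function assumes you have the Hugging Face transformers library and the public GPT-2 checkpoint installed; actual detection tools use their own models and thresholds, so treat this as illustrative rather than as how any particular product works.

```python
import math
import re
import statistics

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


def burstiness(text: str) -> float:
    """One rough proxy for 'burstiness': variation in sentence length.

    Returns the coefficient of variation of sentence word counts;
    higher values mean more varied, more 'human-looking' structure.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)


def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2 (lower = more LLM-like).

    Uses the public GPT-2 checkpoint purely for illustration; this naive
    version only handles text shorter than GPT-2's 1024-token window.
    """
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels yields the cross-entropy loss,
        # and exponentiating that gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())


sample = (
    "In today's fast-paced digital landscape, organizations must leverage "
    "synergies. They must also innovate. Constantly."
)
print(f"burstiness: {burstiness(sample):.2f}, perplexity: {perplexity(sample):.1f}")
```

The specific numbers don't matter much; the point is that both measures are statistical tendencies, not proof of authorship.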
An interesting conundrum this creates: "humanizing" content tends to violate common brand style guides, while "LLM-ing" it tends to conform. Run something like Grammarly on your prose and it takes the liberty of removing as much burstiness and perplexity as possible, making you sound like a polished robot. Insert some sentence fragments. and smoe spelling errers and you sound every bit a sloppy human being, to the chagrin of content managers everywhere.
The world of brand voice, editorial tools, and SEO content is a world in which, ironically, you're ordered not to use LLMs to generate content and praised for generating content with the markers of LLM-generation.
Setting aside the ethical issues with using LLMs to generate content and representing it as original (i.e. plagiarism) and the fact that, if you're a Hit Subscribe author, you sign an agreement not to do this, we're left with a simple pragmatic concern. We want to create content that optically makes sense on a brand blog, but that doesn't read like you said, "GPT-Alexa, write a blog post about DevOps."
Or, as a client contact once put it more pointedly, "I need to be able to answer plausibly when my CEO asks why we can't just generate this ourselves with ChatGPT."
And that's the real charter here. We want to lay out a buffet of things that you, as an author, can do that furnish an answer to that simple question. We want to establish "human markers" that let the reader know they're reading something they couldn't just get an LLM tool to spit out with a simple prompt -- that more went into it than that.
As you read this, it's worth bearing in mind that nobody is asking you to do ALL (or even, exactly, any) of these things. Read this more as "the more of these things you do in your writing, the more obvious you're making it that human thought and effort went into it."
The Markers
So, let's talk human markers.
- Start with a journalistic lede. This could be a catchy hook or the kind of timely, news-pegged opening that an LLM simply couldn't produce. While this does risk dating an article, that can always be edited later on refresh, and it packs an initial wallop in making a first (human) impression.
- Cite and link to authoritative statistics. ChatGPT and its brethren simply fabricate their own, so being precise about this is a human tell.
- Include specific, real-world examples, especially from your own personal experience. LLMs tend to be generic and hand-wavy about specifics.
- Include verbatim, verifiable quotes from authoritative sources. This exudes journalistic integrity and effort, since anyone generating content with an LLM would be smart to steer clear of quotes, what with the hallucinations.
- Include references to current events and trends. Unless specifically prompted and carefully curated, it's unlikely you'd see this from LLM content, but it's quite likely you'd do this naturally as a human.
- Avoid using a formal, passive tone, unless the piece requires it. That tone makes your content read like you prompted an LLM, read the results, and told it to be more serious or something.
- Avoid saying LLM-y stuff, like "in today's fast-paced blah, blah." LLMs love to say stuff like that because the red giant-sized mass of mediocre blog posts they were trained on all say stuff like that. Don't vomit banal filler into your posts to introduce them.
- Add personality and write in your own voice, inasmuch as it's brand appropriate. For instance, ChatGPT would never dare attempt to simulate my rapier wit, and it would especially never go meta the way I am doing right now, as you're reading this.
- Admit when you're unsure of stuff and hedge as appropriate. LLMs default to being confidently wrong. Or maybe they don't. But probably they do. So throw in some hedges and uncertainty.
- Use figurative language, ideally even in novel ways. Even if your use of simile sucks as bad as the river tide (original credit: the Onion), LLMs don't default to figurative language and they certainly won't say something they've never encountered. So like my toddler jumping into the pool and submerging his head for the first time yesterday, take a whack at the unknown, even if you're not sure.
- Be funny or sarcastic if you can pull it off and it's context and style-guide appropriate. But you're on your own here; I can't pull it off.
- Refer to past errors, learning, and growth. If you used to do something incorrectly and you've since learned and fixed it, talk about that.
- Drop popular culture references, if context appropriate. Like Data in Star Trek, LLMs will probably do this awkwardly at best if you prompt them to, and if you're working this hard to plagiarize content, hey, maybe try putting all that effort into just writing it.
- Offer novel observations or insights. If at all possible, see if you can coin things that haven't been expressed before. This can be a reach in a lot of situations where you might be writing to an outline, but drawing comparisons or making observations that are novel is something LLM tech is literally incapable of doing.
- Establish provenance. This isn't about the prose itself, but rather how you create it. For this reason, Hit Subscribe asks authors to write in Google Docs, which automatically establishes a version history. If a given author can point to 22 saved versions where a person they share the doc with can watch a reconstruction of the drafting process, LLM plagiarism looks extremely unlikely. Contrast this with a two-step version history: nothing, then a fully formed, somewhat polished post. The more you can show your work and establish provenance, the more plausible the claim of human creation.