
The 5 Biggest ‘Tells’ That Something Was Written By AI

2025/11/18 04:11

In brief

  • New stylometric studies identify recurring patterns in AI prose, including predictable rhythm, uniform sentiment, and low lexical variety.
  • A Washington Post analysis of 328,744 ChatGPT messages reveals heavy reliance on emojis, favorite words, and the cliché pivot of “Not just X, but Y.”
  • Vocabulary tells evolve quickly, but structural habits such as symmetry, neatness, and negative parallelism persist across model generations.

Is everything written by AI these days? Is this article?

The proliferation of large language models has prompted a new, wary literacy: people can now read a paragraph and wonder who—or what—wrote it. That anxiety exists for good reason.

Recent studies continue to show that the ever-increasing flood of machine-generated prose differs from human writing in increasingly not-so-subtle ways, from specific word choice to easily identifiable structural tics. These patterns matter because they affect far more than school essays and research theses; they shape corporate communications, journalism, and interpersonal email in ways that can muddle trust or authenticity.

Researchers surveying stylometric detection techniques have found consistent, measurable patterns in lexical variety, clause structure, and function-word distributions—a statistical fingerprint that persists across tasks and prompts. While these tells are shrinking with every model generation—OpenAI just fixed its overreliance on em dashes, for instance—the difference between AI slop and human-written prose is still large enough to inform how readers and editors approach suspiciously polished text.

A recent Washington Post analysis of 328,744 ChatGPT messages reinforces this point with real-world data. It found that the model leans heavily on emojis, a narrow palette of favorite words, and everyone’s favorite tell, negative parallelism: “It’s not X, it’s Y” or “It’s less about X and more about Y.”

The Post also warned against overconfidence: none of these traits prove AI authorship; they only raise the probability. Still, when a piece of writing exhibits several of them, the signal gets harder to ignore.

Here are the five strongest signals that a text may have been machine-generated, each anchored in current research.

The 5 most common AI tells

  1. Negative parallelism and oversimplified contrast

    AI overuses the neat, dramatic hinge of “It’s not X, it’s Y,” and its cousin, “not just X, but Y.” These structures create the illusion of insight while supplying very little. Stylometric studies show that LLM outputs tend toward balanced, formulaic clause structures rather than the uneven, intuitive rhythms human writers use. In the Post’s dataset, variations of “not just X, but Y” alone appeared in roughly 6% of all July messages—an astonishing percentage for a single rhetorical tic.
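    As a rough illustration of how such a tic can be measured, the templates above can be matched mechanically. This is a hypothetical sketch, not the Post’s actual methodology, and the patterns are far from exhaustive:

    ```python
    import re

    # Illustrative templates for negative parallelism; a real study
    # would use a much larger, curated set of constructions.
    PATTERNS = [
        re.compile(r"\bnot just\b.+?\bbut\b", re.IGNORECASE),
        re.compile(r"\bit'?s not\b.+?\bit'?s\b", re.IGNORECASE),
        re.compile(r"\bless about\b.+?\bmore about\b", re.IGNORECASE),
    ]

    def count_negative_parallelism(text: str) -> int:
        """Count sentences matching any negative-parallelism template."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return sum(1 for s in sentences if any(p.search(s) for p in PATTERNS))

    sample = "It's not magic, it's statistics. This is not just a trend, but a shift."
    print(count_negative_parallelism(sample))  # → 2
    ```

    Dividing that count by the total number of sentences gives a per-text rate comparable to the roughly 6% figure cited above.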

  2. Over-neat structure and conspicuously consistent rhythm

    LLM-generated text often reads like it was written by someone who revises compulsively but never improvises. Paragraphs follow textbook patterns, transitions are frictionless, and the cadence is almost mathematically even, according to a recent analysis in Nature. Human writing—even careful writing—typically reflects digressions, interruptions, tonal shifts, and asymmetric pacing. Stylometric work comparing LLM outputs to human short stories finds that models exhibit far narrower variance in sentence length and syntactic shape.
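    The “narrower variance” finding is easy to approximate. This sketch (an illustration, not any study’s actual metric) computes the mean and standard deviation of sentence lengths; a low standard deviation relative to the mean suggests the unnaturally even cadence described above:

    ```python
    import re
    import statistics

    def sentence_length_stats(text: str) -> tuple[float, float]:
        """Return (mean, stdev) of sentence lengths in words.

        Sentence splitting on terminal punctuation is a crude
        approximation; real stylometric work uses proper tokenizers.
        """
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
        lengths = [len(s.split()) for s in sentences]
        mean = statistics.mean(lengths)
        stdev = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
        return mean, stdev
    ```

    Running this over a corpus of known-human and known-machine text would let an editor compare variance distributions rather than judge a single sample in isolation.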

  3. Smoothed-out emotional tone and overly courteous hedging

    AI tends to sound friendly in a way no adult actually sounds unless they work in HR or customer support. Phrases like “It’s understandable that…” or endings that gently summarize everything (“Ultimately…”) show up with unnatural regularity. Quantitative reviews of detection methods note that LLM-generated prose exhibits more uniform sentiment and fewer abrupt emotional modulations than human text.

  4. Vague abstractions and evolving “safe” vocabulary

    Models rely heavily on generic nouns—“ecosystem,” “framework,” “dynamic”—and verbs like “leverage,” “unlock,” or “navigate” when they run out of specifics. Studies consistently show lower lexical diversity and heavier nominalization in AI text. The Washington Post and Nature analyses also found that certain AI clichés aren’t static: the infamous “delve” has largely faded, replaced by new favorites like “core” and “modern.” This matters because vocabulary tells evolve quickly; structure is more reliable than any fixed word list.
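    The simplest proxy for lexical diversity is the type-token ratio: unique words divided by total words. The sketch below is illustrative only; because raw TTR is sensitive to text length, real stylometric work prefers length-normalized measures such as MTLD:

    ```python
    import re

    def type_token_ratio(text: str) -> float:
        """Lexical diversity: unique words / total words (case-folded)."""
        words = re.findall(r"[a-z']+", text.lower())
        return len(set(words)) / len(words) if words else 0.0

    print(round(type_token_ratio("the quick brown fox jumps over the lazy dog"), 2))  # → 0.89
    ```

    Lower ratios over comparable text lengths would be consistent with the “narrow palette of favorite words” the Post observed.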

  5. Balanced clauses and conspicuously careful phrasing

    LLMs love symmetry: “While X is true, Y is also important,” or “Whether you’re a beginner or an expert…” These structures feel safe because they avoid commitment. Stylometric studies show that AI text overuses certain function-word patterns and clause constructions at rates that differ sharply from human baselines. Humans tend to be either more abrupt or more discursive; machines aim for diplomatic balance every time.
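    Function-word distributions, mentioned in the research above, can be profiled with a simple frequency count. The word list here is a small illustrative sample; published studies use curated inventories of hundreds of function words:

    ```python
    import re
    from collections import Counter

    # Small illustrative inventory of English function words.
    FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that",
                      "while", "whether", "or", "but", "is", "also"}

    def function_word_profile(text: str) -> dict[str, float]:
        """Relative frequency of each function word per 100 words."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w in FUNCTION_WORDS)
        total = len(words)
        return {w: 100 * c / total for w, c in counts.items()} if total else {}
    ```

    Comparing such profiles against human baselines is how the sharp rate differences cited above are detected; no single frequency proves anything on its own.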

By the way, most of this article was written by AI.


Source: https://decrypt.co/348923/5-biggest-tells-something-written-ai

