
Machines can’t separate truth from noise

2025/10/30 20:53

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

We marvel at how intelligent the latest AI models have become — until they confidently present us with complete nonsense. The irony is hard to miss: as AI systems grow more powerful, their ability to distinguish fact from fiction isn’t necessarily improving. In some ways, it’s getting worse.

Summary

  • AI reflects our information flaws. Models like GPT-5 struggle because training data is polluted with viral, engagement-driven content that prioritizes sensation over accuracy.
  • Truth is no longer zero-sum. Many “truths” coexist, but current platforms centralize information flow, creating echo chambers and bias that feed both humans and AI.
  • Decentralized attribution fixes the cycle. Reputation- and identity-linked systems, powered by crypto primitives, can reward accuracy, filter noise, and train AI on verifiable, trustworthy data.

Consider OpenAI’s own findings: its o3 reasoning model hallucinated answers about 33% of the time in benchmark tests, according to the company’s own paper, and the smaller o4-mini went off the rails nearly half the time. The newest model, GPT-5, was supposed to fix this and indeed claims to hallucinate far less (~9%). Yet many experienced users find GPT-5 dumber in practice: slower, more hesitant, and still often wrong (evidence that benchmarks only get us so far).

Nillion CTO John Woods was explicit in his frustration when he said ChatGPT went from ‘essential to garbage’ after GPT-5’s release. The reality, though, is that ever more advanced models will keep getting worse at telling truth from noise. All of them, not just GPT.

Why would a more advanced AI feel less reliable than its predecessors? One reason is that these systems are only as good as their training data, and the data we’re giving AI is fundamentally flawed. Today, this data largely comes from an information paradigm where engagement trumps accuracy while centralized gatekeepers amplify noise over signal to maximize profits. It’s thus naive to expect truthful AI without first fixing the data problem.

AI mirrors our collective information poisoning

High-quality training data is disappearing faster than we create it. There’s a recursive degradation loop at work: AI primarily digests web-based data; the web is becoming increasingly polluted with misleading, unverifiable AI slop; synthetic data trains the next generation of models to be even more disconnected from reality. 
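To see the shape of that loop, consider a toy simulation in Python (the decay rate and synthetic-share growth are purely illustrative assumptions, not measurements from any study):

```python
# Toy model of the recursive degradation loop: each model generation
# trains on a web corpus whose synthetic share grows, and synthetic
# text carries forward only a fraction of the prior signal quality.
quality, synthetic_share = 1.0, 0.1  # assumed starting values

for generation in range(1, 6):
    # Human-written data keeps its quality; AI slop loses 40% per hop.
    quality = (1 - synthetic_share) * quality + synthetic_share * quality * 0.6
    synthetic_share = min(0.9, synthetic_share * 1.8)  # slop compounds
    print(f"gen {generation}: corpus quality ~ {quality:.2f}")
```

Quality never recovers once the synthetic share dominates, which is the point: the loop degrades silently, one training cycle at a time.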

The problem goes deeper than bad training sets; it lies in the fundamental architecture of how we organize and verify information online. More than 60% of the world’s population now uses social media platforms designed to maximize engagement, often for hours a day. We’re thus exposed, at unprecedented scale, to algorithms that inadvertently reward misinformation.

False stories trigger stronger emotional responses, so they spread faster than their corrections. The most viral content, which is precisely the content most likely to be ingested by AI training pipelines, is thus systematically biased toward sensation over accuracy.

Platforms also profit from attention, not truth. Data creators are rewarded for virality, not veracity. AI companies optimize for user satisfaction and engagement, not factual accuracy. And ‘success’ for chatbots is keeping users hooked with plausible-sounding responses.

Ultimately, AI’s data and trust crisis is an extension of the ongoing poisoning of our collective human consciousness. We’re feeding AI what we consume ourselves. AI systems can’t tell truth from noise because we ourselves can’t.

Truth is consensus after all. Whoever controls the information flow also controls the narratives we collectively perceive as ‘truth’ after they’re repeated enough times. And right now, a bunch of massive corporations hold the reins to truth, not us as individuals. That can change. It must. 

Truthful AI’s emergence is a positive-sum game

How do we fix this? How do we realign our information ecosystem — and by extension, AI — toward truth? It starts with reimagining how truth is created and maintained in the first place.

In the status quo, we often treat truth as a zero-sum game decided by whoever has the loudest voice or the highest authority. Information is siloed and tightly controlled; each platform or institution pushes its own version of reality. An AI (or a person) stuck in one of these silos ends up with a narrow, biased worldview. That’s how we get echo chambers, and that’s how both humans and AI wind up misled.

But many truths in life are not binary, zero-sum propositions. In fact, most meaningful truths are positive-sum — they can coexist and complement each other. What’s the “best” restaurant in New York? There’s no single correct answer, and that’s the beauty of it: the truth depends on your taste, your budget, your mood. That my favorite song is a jazz classic doesn’t make your favorite pop anthem any less “true” for you. One person’s gain in understanding doesn’t have to mean another’s loss. Our perspectives can differ without nullifying each other.

This is why verifiable attribution and reputation primitives are so critical. Truth can’t just be about the content of a claim — it has to be about who is making it, what their incentives are, and how their past record holds up. If every assertion online carried with it a clear chain of authorship and a living reputation score, we could sift through noise without ceding control to centralized moderators. A bad-faith actor trying to spread disinformation would find their reputation degraded with every false claim. A thoughtful contributor with a long track record of accuracy would see their reputation — and influence — rise.
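To make that concrete, here is a minimal sketch in Python of how a living reputation score might rise and fall (the update rule, constants, and names are hypothetical illustrations, not a spec from any existing system):

```python
from dataclasses import dataclass, field

@dataclass
class Contributor:
    did: str                  # decentralized identifier of the author
    reputation: float = 0.5   # start neutral, clamped to (0, 1)
    history: list = field(default_factory=list)

    def record_outcome(self, claim_id: str, verified_true: bool) -> None:
        # Asymmetric update: one falsified claim costs more than one
        # verified claim earns, discouraging spray-and-pray posting.
        delta = 0.05 if verified_true else -0.15
        self.reputation = min(0.99, max(0.01, self.reputation + delta))
        self.history.append((claim_id, verified_true))

alice = Contributor(did="did:example:alice")
alice.record_outcome("claim-1", verified_true=True)
alice.record_outcome("claim-2", verified_true=False)
print(round(alice.reputation, 2))  # 0.4: one false claim outweighs one true one
```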

Crypto gives us the building blocks to make this work: decentralized identifiers, token-curated registries, staking mechanisms, and incentive structures that turn accuracy into an economic good. Imagine a knowledge graph where every statement is tied to a verifiable identity, every perspective carries a reputation score, and every truth claim can be challenged, staked against, and adjudicated in an open system. In that world, truth isn’t handed down from a single platform — it emerges organically from a network of attributed, reputation-weighted voices.
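As a sketch of the shape such a knowledge-graph entry might take (the field names, triple structure, and DID format are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AttributedClaim:
    # A subject-predicate-object triple, signed by an identified author.
    subject: str
    predicate: str
    obj: str                 # 'obj' avoids shadowing Python's built-in 'object'
    author_did: str          # decentralized identifier (e.g., a W3C DID)
    stake: float             # tokens the author locks behind the claim
    challenges: list = field(default_factory=list)  # open disputes

claim = AttributedClaim(
    subject="restaurant:xyz",
    predicate="received_health_violation_notice",
    obj="2024-11-02",
    author_did="did:example:alice",
    stake=100.0,
)
```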

Such a system flips the incentive landscape. Instead of content creators chasing virality at the expense of accuracy, they’d be staking their reputations — and often literal tokens — on the validity of their contributions. Instead of AI training on anonymous slop, it would be trained on attributed, reputation-weighted data where truth and trustworthiness are baked into the fabric of the information itself.
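Under those incentives, a dispute over a claim might resolve along these lines (a hedged sketch: the slashing split and adjudicator fee are illustrative assumptions, not a protocol specification):

```python
def resolve_challenge(author_stake: float, challenger_stake: float,
                      claim_upheld: bool) -> dict:
    """Toy adjudication: the losing side's stake is slashed, mostly
    awarded to the winner, with a cut routed to the adjudicators."""
    adjudicator_fee = 0.10  # assumed 10% cut for whoever adjudicates
    losing_stake = challenger_stake if claim_upheld else author_stake
    return {
        "winner": "author" if claim_upheld else "challenger",
        "winner_payout": losing_stake * (1 - adjudicator_fee),
        "adjudicator_payout": losing_stake * adjudicator_fee,
    }

# A falsified claim costs its author the full stake:
print(resolve_challenge(author_stake=100.0, challenger_stake=50.0,
                        claim_upheld=False))
# {'winner': 'challenger', 'winner_payout': 90.0, 'adjudicator_payout': 10.0}
```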

Now consider AI in this context. A model trained on such a reputation-aware graph would consume a much cleaner signal. It wouldn’t just parrot the most viral claim; it would learn to factor in attribution and credibility. Over time, agents themselves could participate in this system — staking on their outputs, building their own reputations, and competing not just on eloquence but on trustworthiness.
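On the training side, one simple way a pipeline could consume such a reputation-aware graph is to weight examples by author reputation when sampling (a sketch; the linear weighting is an assumption, and real systems would need something more robust):

```python
import random

def sample_training_batch(corpus: list, batch_size: int = 2) -> list:
    """Draw attributed documents in proportion to author reputation,
    so high-trust sources dominate the training signal."""
    weights = [doc["author_reputation"] for doc in corpus]
    return random.choices(corpus, weights=weights, k=batch_size)

corpus = [
    {"text": "verified report",  "author_reputation": 0.92},
    {"text": "anonymous rumor",  "author_reputation": 0.08},
    {"text": "sourced analysis", "author_reputation": 0.75},
]
batch = sample_training_batch(corpus)  # the rumor is rarely drawn
```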

That’s how we break the cycle of poisoned information and build AI that reflects a positive-sum, decentralized vision of truth. Without verifiable attribution and decentralized reputation, we’ll always be stuck outsourcing “truth” to centralized platforms, and we’ll always be vulnerable to manipulation. 

With them, we can finally move beyond zero-sum authority and toward a system where truth emerges dynamically, resiliently, and — most importantly — together.

Billy Luedtke

Billy Luedtke has been building at the frontier of blockchain since Bitcoin in 2012 and Ethereum in 2014. He helped launch EY’s blockchain consulting practice and spent over five years at ConsenSys shaping the Ethereum ecosystem through roles in R&D, Developer Relations, token engineering, and decentralized identity. Billy also helped pioneer self-sovereign identity as Enterprise Lead at uPort, Co-Chair of the EEA’s Digital Identity Working Group, and a founding member of the Decentralized Identity Foundation. Today, he is the founder of Intuition, the native chain for Information Finance, transforming identities, claims, and reputation into verifiable, monetizable data for the next internet.

Source: https://crypto.news/ais-blind-spot-machines-cant-separate-truth-from-noise/
