Disinformation Economics

“False information is no longer just a political problem—it’s a balance sheet risk.” Disinformation economics is the study and management of the measurable financial damage caused by the deliberate spread of false or misleading information through digital, social, and AI-generated media channels.

Executive Summary

The World Economic Forum ranked disinformation as one of the top two global risks for 2025. A University of Baltimore study estimated the annual global cost of fake news at $78 billion, of which an estimated $39 billion stems from stock market losses. The weaponization of generative AI, which enables state actors and market manipulators to produce high-fidelity false narratives at near-zero cost, has elevated disinformation from a reputational hazard to a systemic financial risk. Boards, treasuries, and central banks now treat it as a measurable enterprise risk category.

The Strategic Mechanism

Disinformation imposes financial costs through four distinct channels:

  • Market manipulation: False earnings reports, fabricated M&A rumors, and AI-generated fake regulatory filings are used to engineer short-term price moves for pump-and-dump schemes. AI lowers the production cost of such content to near zero.
  • Corporate reputational attack: State or competitor-sponsored campaigns targeting supply chain ethics, product safety, or executive misconduct cause measurable stock drawdowns, customer attrition, and insurance repricing.
  • Sovereign credit interference: False narratives about debt restructuring, central bank policy, or political instability in emerging markets can trigger capital flight and currency depreciation disproportionate to fundamentals.
  • Investor behavioral distortion: A 2025 Frontiers study found that 22.6% of surveyed investors admitted disinformation narratives had directly influenced their investment decisions—a behavioral transmission channel now embedded in market microstructure.
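Treating these four channels as "a measurable enterprise risk category," as the executive summary puts it, typically means an expected-loss aggregation of the kind enterprise risk teams already use for cyber or operational risk. A minimal sketch follows; every frequency and severity figure below is an illustrative placeholder, not an empirical estimate:

```python
# Hedged sketch: annualized expected loss (AEL) per disinformation channel,
# modeled as event frequency (events/year) x average severity (USD).
# All numbers are hypothetical placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class Channel:
    name: str
    frequency_per_year: float  # expected disinformation events per year
    avg_severity_usd: float    # average loss per event, in USD

    @property
    def annual_expected_loss(self) -> float:
        return self.frequency_per_year * self.avg_severity_usd


# The four channels from the taxonomy above, with placeholder parameters.
channels = [
    Channel("market_manipulation", 2.0, 5_000_000),
    Channel("reputational_attack", 1.0, 12_000_000),
    Channel("sovereign_credit_interference", 0.25, 40_000_000),
    Channel("investor_behavioral_distortion", 4.0, 1_500_000),
]

total_ael = sum(c.annual_expected_loss for c in channels)

if __name__ == "__main__":
    for c in channels:
        print(f"{c.name}: ${c.annual_expected_loss:,.0f}/yr")
    print(f"total annual expected loss: ${total_ael:,.0f}")
```

A real implementation would replace the point estimates with loss distributions (e.g., Monte Carlo over frequency and severity), but the frequency-times-severity structure is the standard starting point.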

Market & Policy Impact

  • Deepfake liability: AI-generated audio and video impersonating CEOs or central bank governors during earnings calls or press conferences represent an emerging class of securities fraud.
  • Brand insurance repricing: Underwriters are developing disinformation riders for D&O and crisis communications policies, creating a nascent insurance market around information risk.
  • Regulatory response: The EU’s Digital Services Act (DSA) mandates disinformation risk assessments for very large online platforms, creating compliance costs and data disclosure obligations.
  • Intelligence budgets: Corporations with high geopolitical exposure are internalizing threat intelligence functions previously reserved for national security agencies.
  • Credit market spillover: False sovereign credit narratives amplified by social media have contributed to EM bond spread widening events disconnected from actual fiscal fundamentals, as seen in multiple 2024 episodes.
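The credit-market-spillover point implies a detectable signature: spread widening that is statistically abnormal while fundamentals barely move. A minimal screen for that pattern is sketched below with synthetic data; the z-score threshold, the 0.5% fundamentals band, and the numbers themselves are illustrative assumptions, not a calibrated model:

```python
# Hedged sketch: flag EM bond spread moves that look disconnected from
# fundamentals. Spreads are in basis points; all data and thresholds are
# synthetic illustrations.

from statistics import mean, stdev


def flag_anomalous_widenings(spread_bp, fundamentals_index, z_threshold=2.5):
    """Return day indices where the one-day spread change is a z-score
    outlier versus the sample while the fundamentals index moved < 0.5%.
    (With one large outlier in a short sample, the outlier inflates the
    sample mean and stdev, so the threshold is kept modest.)"""
    changes = [b - a for a, b in zip(spread_bp, spread_bp[1:])]
    mu, sigma = mean(changes), stdev(changes)
    flagged = []
    for i, dc in enumerate(changes, start=1):
        fund_move = abs(fundamentals_index[i] / fundamentals_index[i - 1] - 1)
        z = (dc - mu) / sigma if sigma else 0.0
        if z > z_threshold and fund_move < 0.005:
            flagged.append(i)
    return flagged


# Synthetic example: flat fundamentals, one abrupt 60 bp widening on day 6.
spreads = [300, 302, 301, 303, 302, 304, 364, 362, 360, 361]
fundamentals = [100.0] * 10
```

A production version would regress spreads on a fundamentals factor model and test residuals rather than raw changes, but the disconnect-from-fundamentals logic is the same.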

Modern Case Study: AI-Amplified Market Manipulation, 2024–2025

In October 2025, NPR documented a systematic pattern of AI-generated fake financial news articles targeting mid-cap U.S. equities. These articles, indistinguishable from legitimate reporting, were published on domains mimicking established financial media, indexed by search engines, and amplified by social media bots before fact-checkers could intervene. The targeted equities showed average intraday price moves of 4–7% before corrections, generating arbitrage profits for the originating actors. The SEC initiated rule-making on AI-generated securities content in Q4 2025, and FINRA issued guidance requiring broker-dealers to implement AI content authentication protocols.

In parallel, Indonesia's financial regulator documented a 2025 campaign of economic conspiracy theories spread via social media that prompted retail investors to withdraw bank deposits and buy physical gold at scale, demonstrating that disinformation economics is equally potent in emerging markets with lower baseline financial literacy.
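The attack pattern in the U.S. case, an abnormal price move coinciding with a burst of coverage from look-alike domains, suggests the shape of a surveillance screen. A minimal sketch follows; the domain list, ticker names, thresholds, and data shapes are all hypothetical illustrations, not a real detection system:

```python
# Hedged sketch: flag tickers whose intraday move exceeds a threshold while a
# burst of articles arrives from domains outside a trusted set. All names,
# domains, and thresholds below are hypothetical.

TRUSTED_DOMAINS = {"reuters.com", "bloomberg.com", "wsj.com"}  # illustrative


def flag_suspect_tickers(intraday_moves, articles,
                         move_threshold=0.04, burst_threshold=5):
    """intraday_moves: {ticker: absolute fractional price move}
    articles: list of (ticker, source_domain) pairs seen in a time window.
    Returns tickers with both a large move and an untrusted-coverage burst."""
    untrusted_counts = {}
    for ticker, domain in articles:
        if domain not in TRUSTED_DOMAINS:
            untrusted_counts[ticker] = untrusted_counts.get(ticker, 0) + 1
    return sorted(
        t for t, move in intraday_moves.items()
        if move >= move_threshold
        and untrusted_counts.get(t, 0) >= burst_threshold
    )


# Hypothetical window: one 5.5% mover saturated by look-alike-domain articles.
moves = {"ABCD": 0.055, "WXYZ": 0.01}
feed = [("ABCD", "fin-news-daily.example")] * 6 + [("WXYZ", "reuters.com")] * 3
```

Real systems of this kind would add content authentication signals (e.g., provenance metadata of the sort FINRA's guidance points toward) and bot-amplification features, but the move-plus-coverage join is the core heuristic.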