OpenAI CEO Sam Altman warns that a surge in AI-generated accounts and posts driven by large language models is making social media feel fake. The rise of convincing AI content erodes trust, amplifies misinformation, and alters how communities form and engage online.
Sam Altman, the OpenAI CEO, recently warned that social media increasingly feels "fake" because of a rapid rise in AI-generated accounts and posts powered by large language models. His observations, drawn from Reddit communities and activity on X, point to a shift in how people experience online conversations. When users ask, "Is this social media post real?", the answer is becoming harder to trust.
The rise of advanced LLMs has lowered the barrier to creating convincing content at scale. What once required significant technical skill is now possible for anyone with access to common AI tools. That creates new risks for everyone who relies on social feeds for news, support, and community.
Altman pointed to activity in Reddit communities organized around AI, where many accounts now appear to be run or heavily aided by LLMs. Researchers also note that a significant share of internet traffic is non-human, and that detection methods built on spotting repetitive patterns struggle against modern models that vary language and context.
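To see why, consider a minimal sketch of that older style of detection: flag an account whose posts are near-duplicates of one another. The shingle size, threshold, and sample posts below are illustrative assumptions, not any platform's real system.

```python
from itertools import combinations


def shingles(text: str, n: int = 3) -> set:
    """Break a post into overlapping n-word chunks."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 0 = disjoint, 1 = identical."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def looks_repetitive(posts: list, threshold: float = 0.8) -> bool:
    """Flag an account whose posts are near-duplicates of one another."""
    sets = [shingles(p) for p in posts]
    return any(jaccard(a, b) >= threshold for a, b in combinations(sets, 2))


# A copy-paste bot trips the check; an LLM that rephrases each post does not.
copy_paste = ["Huge sale, buy now at example.com",
              "Huge sale, buy now at example.com"]
llm_varied = ["This deal is genuinely worth a look.",
              "You might want to check out this offer."]
print(looks_repetitive(copy_paste))  # True
print(looks_repetitive(llm_varied))  # False
```

Vary the wording even slightly with an LLM and the similarity score collapses, which is exactly the failure Altman's observation points to.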
Some cybersecurity estimates suggest that up to 40 percent of content on major social platforms may involve some level of AI assistance, from fully automated agents to AI-aided human posts. That figure highlights the scale of the problem for platform trust and content reliability.
Practical steps users and moderators can take right now include:

- Learn the common signs of automated accounts, such as near-identical replies and unnaturally regular posting schedules (see the sketch after this list).
- Demand clear digital trust signals from platforms, such as verified-human indicators and provenance labels.
- Support community-led fact checking so that artificial consensus gets challenged quickly.
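As a concrete example of the first item, here is a minimal sketch of a posting-cadence check: accounts that post at near-constant intervals are more likely to be automated. The timestamps and jitter cutoff are illustrative assumptions; real moderation tools weigh many signals together.

```python
from statistics import pstdev


def gaps_between_posts(timestamps: list) -> list:
    """Seconds between consecutive posts, oldest first."""
    ordered = sorted(timestamps)
    return [b - a for a, b in zip(ordered, ordered[1:])]


def looks_scheduled(timestamps: list, max_jitter: float = 5.0) -> bool:
    """Humans post irregularly; a near-constant gap suggests automation."""
    gaps = gaps_between_posts(timestamps)
    return len(gaps) >= 3 and pstdev(gaps) <= max_jitter


# One account posts every ~600 seconds almost to the second; the other
# follows an irregular, human-looking rhythm.
bot_like = [0, 600, 1201, 1799, 2400]
human_like = [0, 45, 4000, 4100, 9000]
print(looks_scheduled(bot_like))    # True
print(looks_scheduled(human_like))  # False
```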
Addressing this challenge requires technical, design, and community measures. Some promising approaches include:

- Better detection that combines behavioral signals, such as posting cadence and account history, rather than relying on repetitive-text patterns alone.
- Platform-level transparency, such as machine-readable labels that disclose when content is AI-generated or AI-assisted (sketched below).
- Identity innovations, such as proof-of-personhood checks, that raise the cost of mass account creation.
- Digital literacy education that teaches users to recognize and question synthetic content.
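To make the transparency idea concrete, here is a minimal sketch of a provenance field attached to every post, which a client could render as a label. The field names and categories are illustrative assumptions, not any platform's actual schema.

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    HUMAN = "human"              # author attests the post was hand-written
    AI_ASSISTED = "ai-assisted"  # a human directed it, AI helped draft
    AUTOMATED = "automated"      # posted by an agent without human review


@dataclass
class Post:
    author: str
    body: str
    provenance: Provenance

    def label(self) -> str:
        """Text a client could display next to the post."""
        if self.provenance is Provenance.HUMAN:
            return ""
        return f"[{self.provenance.value}]"


post = Post("newsbot", "Markets closed higher today.", Provenance.AUTOMATED)
print(post.label(), post.body)  # [automated] Markets closed higher today.
```

Making the field machine-readable matters more than the exact label text: third-party clients, researchers, and moderation tools can then filter and audit on it.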
That Altman admits social media feels fake is notable because the warning comes from a leader in AI development. The problem of AI-generated content is not purely technical; it touches how people form trust online and how communities function. A multi-layered response that combines better detection, platform-level transparency, identity innovations, and digital literacy education will be needed to preserve authentic online community experiences.
For readers asking what to do next, start with small steps: learn how to spot bots, demand clear digital trust signals from platforms, and support community-led fact checking. Those actions make it harder for artificial consensus to pass as genuine public opinion.