Views
Nov 6, 2025 · 6 minutes read

AI Slop II: Content Consumers and Creators

Authors
Dr Jun-E Tan
Key Takeaways

This is the second part of a three-part series on AI slop. In this article, I explore some short- and longer-term consequences of AI slop for content consumers and content creators, navigating the emerging challenges both groups face in a communication environment increasingly polluted with low-quality AI-generated content.

The realism of AI slop makes it harder for consumers to distinguish between AI-generated and human-made content. The sheer volume of slop that has flooded communication platforms is also concerning. The aftermath of this communication disruption could be a matter of life or death, as we have seen AI-generated content disorient aid efforts during Myanmar’s devastating earthquake in March 2025.

AI slop pollution also forces content creators to compete harder for audience and market share within their niche, and creators are increasingly being crowded out. Creators also bear the burden of proving that their content is authentic and human-made when allegations arise. In addition, a large majority of creators remain uncompensated for their work, which was used to train AI without their consent.

Content consumers and creators are two parts of a communication process that are overwhelmed by a flood of AI-generated content. However, the larger point here is not what happens to each group individually, but what happens in between both groups. AI slop disrupts the transmission of meaning from creator to consumer, undermining the very point of communicating.


Introduction

AI slop refers to the overload of convincing but low-quality AI-generated content that threatens to overwhelm our media, information and communication landscape. With the proliferation of generative AI (or gen-AI) tools in recent years, AI slop has encroached into spaces that used to be exclusive to human expression.

In this article, I dissect some short- and longer-term consequences of AI slop from the perspectives of content consumers and content creators, breaking down the challenges they face in an increasingly crowded and noisy communication environment.

This is the second article within a three-part AI slop series. Interested readers can refer to Part One for examples of AI slop in various genres of media and the incentives that drive its proliferation. The upcoming Part Three goes into systemic impacts.  

Content consumers

In an environment overwhelmed by AI slop, the main problem content consumers face is having to sift through high volumes of poor-quality information and inauthentic cultural products.

While poor-quality content has always existed, the critical difference made by gen-AI lies in two dimensions. Firstly, its level of convincingness makes it much harder to differentiate between AI mimicry and actual human-made content that takes time, energy and resources to produce1.

Here it may help to zoom in on mimicry, which lies at the crux of why AI slop is so difficult to detect. As pointed out by AI researchers like Emily Bender and Margaret Mitchell2, gen-AI models such as LLMs and diffusion models are very good at capturing and mimicking “form”, such as syntax and style. Generated content therefore looks persuasive and legitimate at first glance. However, AI systems do not understand “meaning” in the way that humans do, and careless mass generation of content undermines the transmission of meaning, which is the core objective of human communication.

Secondly, there is the sheer volume of the content, given the ease with which it is generated and the accessibility of the tools used to generate it. To give one example, in a Wired report on the publishing platform Medium.com, two different AI detection tools yielded similar results, estimating that more than 40% of articles on Medium were AI-generated, sampled at various points in 20243,4. Part One of this series5 provides more snapshots in time of various media and genres that have experienced the encroachment of AI slop.

In everyday life, AI slop means extra cognitive load and inconvenience for consumers filtering through content to find useful information and worthy entertainment. However, in times of crisis (such as during an environmental disaster), when it is important to get vital information across quickly, too much noise can drown out lifesaving signals.

The impacts of AI slop on crisis communication are not hypothetical and have been recorded in several disasters globally. From Hurricane Helene in the US in October 2024 (the first major environmental disaster to happen in the era of generative AI6) to Myanmar’s devastating earthquake measuring 7.7 on the Richter scale in March 20257, observers have noted with alarm that AI-generated misinformation misled and delayed official responses and added to the confusion for those seeking and providing help. In the longer term, the concern is that the volume of misinformation will erode trust in institutions, and that information fatigue will lead to apathy and lessen humanitarian actions such as volunteering and donations.

Content creators

With AI slop, content creators (including but not limited to artists, writers, musicians and researchers) contend with at least three distinct challenges.

The first is competition for audience and market share within their niche, and resulting job displacement within content industries. Gen-AI creates an oversupply of content, diluting the value of every piece. Creators are under pressure to produce constantly, and may choose quantity over quality as they try to remain relevant in the attention economy8.  

Within a competitive and increasingly crowded communication environment, jobs within content industries are directly impacted. An example is the news media industry, which has seen massive restructuring, not only due to competition but also because of economic imperatives to use AI to do more with less. This has already been observed in Malaysia: in June 2024, Media Chinese International (MCI), which owns most major Chinese news dailies in the country, announced that it would integrate AI into its operations and let go of 44% of its staff (800 of 1,800) over the course of five years9.

Secondly, the rise of AI slop has increased consumer backlash against AI-generated content, while platforms are increasingly taking action to prevent fraudulent activity. As consumers and platforms try to detect and weed out AI-generated content, creators bear the burden of having to prove that their content is authentically human-made.

In the video game industry, for instance, large publishers such as EA and Take-Two warn shareholders of the reputational risks of using AI10. The vigilante stance of some gamers against AI art in newly launched games has resulted in accusations of AI use even when there was none. Large publishers such as Nintendo have been compelled to release statements defending against false accusations, while smaller studios have had to release process videos to dispel rumours11.

Turning to the music industry, music streaming platforms are increasingly using AI detectors to identify and deprioritise AI-generated content as they observe a pattern of associated fraudulent activity. For example, Deezer reports that it receives 20,000 AI-generated tracks (18% of new uploads) daily, and that 70% of their streams come from fake listeners, whether bots or humans, used to generate fraudulent royalty payments for scammers12.

To mitigate this, platforms are increasingly using AI to detect AI-generated content or to identify suspicious user activity (such as streaming the same song at the same time across multiple devices). As platforms fight AI with AI, some musicians find themselves facing automated content takedowns with little recourse when the detectors are triggered. Besides having release and marketing plans derailed, there is also a material cost to re-uploading the content, with no guarantee that it will not be taken down again.
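To make the co-streaming heuristic mentioned above concrete, here is a minimal sketch in Python. The event structure, field names and device threshold are all invented for illustration; real platforms combine far richer behavioural signals than this.

```python
from collections import defaultdict

def flag_suspicious_streams(events, device_threshold=3):
    """Flag (user, track, minute) groups streamed from many devices at once.

    `events` is a list of dicts with keys: user, track, device, minute.
    A purely illustrative heuristic: the same account playing the same
    track in the same minute on several devices suggests stream farming.
    """
    groups = defaultdict(set)
    for e in events:
        # Collect the distinct devices behind each (user, track, minute) key
        groups[(e["user"], e["track"], e["minute"])].add(e["device"])
    return [key for key, devices in groups.items()
            if len(devices) >= device_threshold]

# Hypothetical log: one account plays the same track simultaneously on 3 devices
events = [
    {"user": "u1", "track": "t9", "device": "d1", "minute": 0},
    {"user": "u1", "track": "t9", "device": "d2", "minute": 0},
    {"user": "u1", "track": "t9", "device": "d3", "minute": 0},
    {"user": "u2", "track": "t9", "device": "d4", "minute": 0},
]
print(flag_suspicious_streams(events))  # → [('u1', 't9', 0)]
```

The point of the sketch is only that such rules are cheap to automate at scale, which is precisely why false positives against legitimate musicians are hard to avoid.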

Effectively, the climate of mistrust around AI use affects all creators, even those who do not actually use it. Bigger players with more resources are able to weather accusations and takedowns better than independent creators.

Lastly, there is a need to acknowledge that creators suffer from the uncompensated appropriation of their work in the training of AI models, which then creates competition that squeezes them out of the market. Apart from moral considerations, there is again a material impact for creators who want to fight back. The mounting pile of copyright lawsuits in the US13 and globally14 is testament to the struggle.

This space is moving quickly. In September 2025, Anthropic agreed to pay USD 1.5 billion (approx. RM 6.3 billion) to settle a class-action lawsuit brought by authors over the use of their copyrighted works, without permission, to train its AI model Claude15. The settlement amounts to $3,000 per book for 500,000 pirated books (to be expanded if more works are uncovered), and authors can file claims if they find their books in the list of works16.

As stated by the authors’ lawyers, this is “the largest copyright recovery in history”, the first such development in the AI era, setting a precedent on copyright compensation for works used to train AI models. However, as noted by Aaron Moss, a lawyer specialising in intellectual property, $1.5 billion is small in proportion to Anthropic’s valuation of $183 billion, amounting to a “speeding ticket” rather than a “stop sign” for AI companies’ appropriation of creators’ works17. Moss also pointed out that the number of books pirated by Anthropic was 7 million, far more than the 500,000 deemed eligible for payouts18.

Conclusion

In this article, I have discussed AI slop’s impacts on content consumers and creators, as two parts of a communication process that are overwhelmed by a flood of AI-generated content. However, the larger point here is not what happens to each group individually, but what happens in between both groups.  

The essence of communication is the transmission of meaning from senders (content creators) on one side to receivers (content consumers) on the other. AI slop disrupts this very process, breaking the communication environment by allowing everyone to shout so that no one is heard. Noise rises to the level where meaning is no longer discernible from nonsense, undermining the very point of communicating.

In the next article, we look beyond the individual harms to content consumers and creators, taking a broader perspective on AI slop’s systemic risks to society at large and to AI models themselves.

Footnotes
  1. Velásquez-Salamanca, Martín-Pascual, and Andreu-Sánchez (2025)
  2. Bender et al. (2021)
  3. Emi and Spero (2024)
  4. Responding to the article, the CEO of Medium stated that even though there was an uptick in AI-generated content on its site, most of it remained unread as Medium’s content policy prioritises human-created content. This stands in contrast with other platform policies, such as LinkedIn and Meta which actively encourage the use of AI to generate content.
  5. Tan (2025)
  6. Kayyem (2024)
  7. Landicho and Trajano (2025)
  8. Erickson (2024)
  9. CIJ Malaysia (2024)
  10. Schreier (2025)
  11. Carpenter (2025)
  12. Malik (2025)
  13. Chat GPT Is Eating the World (2025)
  14. Assunção (2025)
  15. Brittain and Scarcella (2025)
  16. JND Legal Administration, n.d.
  17. Moss (2025)
  18. According to Moss, “only works registered with the U.S. Copyright Office within five years of publication and before Anthropic downloaded them qualify for settlement proceeds”.
References

1 Velásquez-Salamanca, Daniela, Miguel Ángel Martín-Pascual, and Celia Andreu-Sánchez. 2025. "Interpretation of AI-Generated vs. Human-Made Images." Journal of Imaging 11 (7): 227. https://doi.org/10.3390/jimaging11070227.

2 Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-23. FAccT '21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.

3 Emi, Bradley, and Max Spero. 2024. "Technical Report on the Pangram AI-Generated Text Classifier." arXiv. https://doi.org/10.48550/arXiv.2402.14873.

5 Tan (2025) - This refers to the first article in the series, "AI Slop I: Pollution in Our Communication Environment."

6 Kayyem, Juliette. 2024. "The Fog of Disaster Is Getting Worse." The Atlantic (blog). October 5, 2024. https://www.theatlantic.com/ideas/archive/2024/10/hurricane-helene-misinformation-ai/680160/.

7 Landicho, Keith Paolo Catibog, and Karryl Kim Sagun Trajano. 2025. "Disasters and Disinformation: AI and the Myanmar 7.7 Magnitude Earthquake." May 1, 2025. https://rsis.edu.sg/rsis-publication/idss/ip25055-disasters-and-disinformation-ai-and-the-myanmar-7-7-magnitude-earthquake/?utm_source=chatgpt.com.

8 Erickson, Kristofer. 2024. "AI and Work in the Creative Industries: Digital Continuity or Discontinuity?" Creative Industries Journal 0 (0). Routledge: 1-21. https://doi.org/10.1080/17510694.2024.2421135.

10 Schreier, Jason. 2025. "Video-Game Companies Have an AI Problem: Players Don't Want It." Bloomberg.com, May 23, 2025. https://www.bloomberg.com/news/newsletters/2025-05-23/video-game-companies-have-an-ai-problem-players-don-t-want-it.

11 Carpenter, Nicole. 2025. "A Real Issue: Video Game Developers Are Being Accused of Using AI - Even When They Aren't." The Guardian, June 26, 2025, sec. Games. https://www.theguardian.com/games/2025/jun/26/video-game-developers-using-ai-even-when-they-arent-stamina-zero.

12 Malik, Aisha. 2025. "Deezer Starts Labeling AI-Generated Music to Tackle Streaming Fraud." TechCrunch (blog). June 20, 2025. https://techcrunch.com/2025/06/20/deezer-starts-labeling-ai-generated-music-to-tackle-streaming-fraud/.

14 Assunção, Isadora Valadares. 2025. "Beyond Regulation: What 500 Cases Reveal About the Future of AI in the Courts."

15 Brittain, Blake, and Mike Scarcella. 2025. "Anthropic Agrees to Pay $1.5 Billion to Settle Author Class Action." Reuters, September 5, 2025, sec. Boards, Policy & Regulation. https://www.reuters.com/sustainability/boards-policy-regulation/anthropic-agrees-pay-15-billion-settle-author-class-action-2025-09-05/.

17 Moss, Aaron. 2025. "Anthropic's $1.5 Billion Speeding Ticket." Copyright Lately (blog). September 8, 2025. https://copyrightlately.com/anthropic-settlement/.
