
Introduction
AI slop refers to the flood of convincing but low-quality AI-generated content that threatens to overwhelm our media, information and communication landscape. With the proliferation of generative AI (or gen-AI) tools in recent years, AI slop has encroached into spaces that were once exclusive to human expression.
In this article, I will dissect some short- and longer-term consequences of AI slop from the perspectives of content consumers and content creators, breaking down the challenges they face in an increasingly crowded and noisy communication environment.
This is the second article within a three-part AI slop series. Interested readers can refer to Part One for examples of AI slop in various genres of media and the incentives that drive its proliferation. The upcoming Part Three goes into systemic impacts.
Content consumers
In an environment overwhelmed by AI slop, the main problem content consumers face is having to sift through high volumes of poor-quality information and inauthentic cultural products.
While poor-quality content has always existed, gen-AI changes the situation along two dimensions. The first is how convincing the output is, which makes it much harder to differentiate AI mimicry from actual human-made content that takes time, energy and resources to produce [1].
Here it may help to zoom in on mimicry, which lies at the heart of why AI slop is so hard to detect. As AI researchers such as Emily Bender and Margaret Mitchell have pointed out [2], gen-AI models such as LLMs and diffusion models are very good at capturing and mimicking “form”, such as syntax and style. Generated content therefore looks persuasive and legitimate at first glance. However, AI systems do not understand “meaning” in the way that humans do, and the careless mass generation of content undermines the transmission of meaning, which is the core objective of human communication.
Secondly, there is the sheer volume of the content, given the ease with which it is generated and the accessibility of the tools used to generate it. To give one example, in Wired's reporting on the publishing platform Medium.com, two different AI detection tools, run on samples taken at various points in 2024, yielded similar estimates that more than 40% of articles on Medium were AI-generated [3][4]. Part One of this series [5] provides more snapshots in time of various media and genres that have experienced the encroachment of AI slop.
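For a sense of how such estimates are produced: the basic method is to sample articles and run a classifier over each one. Below is a minimal sketch in Python; `detector` stands in for whichever AI-text classifier is used (the tools in the Wired piece are commercial services, so all the names here are illustrative assumptions, not their actual interfaces):

```python
import random

def estimate_ai_share(articles, detector, sample_size=500, seed=42):
    """Estimate the proportion of AI-generated items in a corpus by
    classifying a random sample. `detector` is any callable returning
    True when it judges a text to be AI-generated (a stand-in here,
    not a real tool)."""
    random.seed(seed)
    sample = random.sample(articles, min(sample_size, len(articles)))
    flagged = sum(1 for text in sample if detector(text))
    return flagged / len(sample)
```

Such classifiers have non-trivial error rates, which is one reason that comparing the estimates of two independent detectors, as in the Medium study, adds confidence to the headline figure.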
In everyday life, AI slop means extra cognitive load and inconvenience for consumers, who must filter through it to find useful information and worthwhile entertainment. In times of crisis (such as during an environmental disaster), however, when it is important to get vital information across quickly, too much noise can drown out lifesaving signals.
The impacts of AI slop on crisis communication are not hypothetical and have been recorded in several disasters globally. From Hurricane Helene in the US in September 2024 (the first major environmental disaster to happen in the era of generative AI [6]) to Myanmar's devastating magnitude-7.7 earthquake in March 2025 [7], observers have noted with alarm that AI-generated misinformation misled and delayed official responses, and added to the confusion for those trying to seek or provide help. In the longer term, the concern is that the volume of misinformation will erode trust in institutions, and that information fatigue will breed apathy, reducing humanitarian actions such as volunteering and donations.
Content creators
Faced with AI slop, content creators (including, but not limited to, artists, writers, musicians and researchers) contend with at least three distinct challenges.
The first is competition for audience and market share within their niche, and the resulting job displacement within content industries. Gen-AI creates an oversupply of content, diluting the value of every piece. Creators are under pressure to produce constantly, and may choose quantity over quality as they try to remain relevant in the attention economy [8].
Within a competitive and increasingly crowded communication environment, jobs within content industries are directly affected. One example is the news media industry, which has seen massive restructuring, not only because of competition but also because of the economic imperative to use AI itself to do more with less. This has already been observed in Malaysia: in June 2024, Media Chinese International (MCI), which owns most of the major Chinese-language news dailies in the country, announced that it would integrate AI into its operations and let go of 44% of its staff (800 of 1,800) over the course of five years [9].
Secondly, the rise of AI slop has fuelled consumer backlash against AI-generated content, while platforms increasingly take action to prevent fraudulent activity. As consumers and platforms try to detect and weed out AI-generated content, creators bear the burden of proving that their work is authentically human-made.
In the video game industry, for instance, large publishers such as EA and Take-Two warn shareholders of the reputational risks of using AI [10]. The vigilante stance of some gamers against AI art in newly launched games has resulted in accusations of AI use even where there was none. Large publishers such as Nintendo have been compelled to release statements defending against false accusations, while smaller studios have had to release process videos to dispel rumours [11].
Turning to the music industry, streaming platforms are increasingly using AI detectors to identify and deprioritise AI-generated content, having observed a pattern of associated fraud. Deezer, for example, reports receiving 20,000 AI-generated tracks daily (18% of new uploads), and that 70% of the streams of these tracks come from fake listeners, whether bots or humans, generating fraudulent royalty payments for the scammers [12].
To mitigate this, platforms are increasingly using AI to detect AI-generated content or to flag suspicious user activity (such as streaming the same song at the same time across multiple devices). As platforms fight AI with AI, some musicians find themselves facing automated content takedowns with little recourse when the detectors are triggered. Besides having release and marketing plans derailed, there is also a material cost to re-uploading the content, with no guarantee that it will not be taken down again.
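To make the kind of rule described above concrete, here is a minimal sketch in Python of a concurrency check over play events. The event fields, threshold and function names are my own illustrative assumptions, not any platform's actual implementation:

```python
from collections import defaultdict

def flag_concurrent_streams(events, max_devices=1):
    """Flag accounts that play the same track at the same time on more
    devices than a single real listener plausibly could.

    `events` is a list of dicts with keys: account, track, device,
    start, end (timestamps in seconds). All names are illustrative.
    """
    flagged = set()
    groups = defaultdict(list)
    for e in events:
        groups[(e["account"], e["track"])].append(e)

    for (account, _track), plays in groups.items():
        plays.sort(key=lambda e: e["start"])
        active = []  # (end_time, device) of plays still in progress
        for e in plays:
            # Drop plays that ended before this one started.
            active = [(end, dev) for end, dev in active if end > e["start"]]
            devices = {dev for _, dev in active} | {e["device"]}
            if len(devices) > max_devices:
                flagged.add(account)
            active.append((e["end"], e["device"]))
    return flagged

# Two overlapping plays of the same track on different devices:
events = [
    {"account": "u1", "track": "t1", "device": "phone", "start": 0, "end": 200},
    {"account": "u1", "track": "t1", "device": "laptop", "start": 50, "end": 250},
]
print(flag_concurrent_streams(events))  # {'u1'}
```

Real systems would, of course, combine many such signals (play counts, device fingerprints, payout patterns) rather than rely on a single rule, which is part of why false positives against legitimate musicians are hard to avoid entirely.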
Effectively, the climate of mistrust around AI use affects all creators, even those who do not actually use it. Bigger players with more resources can weather accusations and takedowns better than independent creators can.
Lastly, there is a need to acknowledge that creators suffer from the uncompensated appropriation of their work to train AI models, which then creates the very competition that squeezes them out of the market. Beyond the moral considerations, there is again a material cost for creators who want to fight back. The mounting pile of copyright lawsuits in the US [13] and globally [14] is testament to the struggle.
This space is moving quickly. In September 2025, Anthropic agreed to pay USD 1.5 billion (approx. RM 6.3 billion) to settle a class-action lawsuit brought by authors over the use of their copyrighted works, without permission, to train its AI model Claude [15]. The settlement works out to USD 3,000 per book for 500,000 pirated books (to be expanded if more works are uncovered), and authors can file claims if they find their books in the list of covered works [16].
The authors' lawyers called this "the largest copyright recovery in history": the first settlement of its kind in the AI era, setting a precedent for copyright compensation when works are used to train AI models. However, as noted by Aaron Moss, a lawyer specialising in intellectual property, USD 1.5 billion is small in proportion to Anthropic's valuation of USD 183 billion, amounting to a "speeding ticket" rather than a "stop sign" for AI companies' appropriation of creators' works [17]. Moss also pointed out that Anthropic pirated some 7 million books, far more than the 500,000 deemed eligible for the payouts [18].
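A quick back-of-the-envelope check, using only the figures reported above (my own arithmetic, not a figure from the case itself), shows both where the headline sum comes from and why Moss considers it small:

$$
500{,}000 \text{ books} \times \$3{,}000 = \$1.5 \text{ billion}, \qquad
\frac{\$1.5 \text{ billion}}{\$183 \text{ billion}} \approx 0.8\%
$$

Less than one percent of the company's valuation, for the works deemed eligible so far.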
Conclusion
In this article, I have discussed AI slop's impacts on content consumers and creators, the two parties to a communication process that is being overwhelmed by a flood of AI-generated content. The larger point, however, is not what happens to each group individually, but what happens between them.
The essence of communication is the transmission of meaning from senders (content creators) to receivers (content consumers). AI slop disrupts this very process, breaking the communication environment: when everyone can shout, no one can be heard. Noise rises to the point where meaning is no longer discernible from nonsense, undermining the very purpose of communicating.
In the next article, we look beyond these individual harms to content consumers and creators, taking a broader view of AI slop's systemic risks to society at large, and to AI models themselves.







