
Introduction
AI slop, a flood of low-quality AI-generated content, pollutes our communication environment with noise that drowns out important signals.
In this article, I explore some systemic effects of AI slop, first from the angle of society at large, with a particular focus on democracy and science as institutions that have been disrupted. Next, I look into the concept of model collapse, where AI models trained on AI-generated content gradually degrade to the point of failure.
This is the third and final article within the AI slop series. Interested readers can refer to Part 1 for examples of AI slop in various genres of media and a discussion on the incentives that drive its proliferation. Part 2 addresses consequences of AI slop on content consumers and content creators.
Society at large
AI slop leaching into the communication and media environment has had far-reaching impacts at the societal level. As argued in my previous work[1],[2], societal harms associated with AI can be viewed through a relational perspective, where the unit of analysis of harm[3] shifts from the level of individuals to the relationships and connections between them.
It is therefore important to look beyond the impacts of AI slop on individuals (e.g. creators or consumers) and also examine macro-level disruptions to social structures that are fundamental to the functioning of modern society. AI slop changes the nature and quality of relationships in societies, and erodes the strength of institutions as the information environment becomes crowded with AI-generated content. I offer two examples.
AI, democracy and public trust
Democracy as a system of government - through power conferred on elected representatives, or directly held by citizens themselves - relies on citizens having access to accurate and timely information to make informed decisions[4]. This information intake is important not only at an individual level, but also at a community level, where factual information ideally creates a shared social reality that supports public discourse and policy deliberation.
With the proliferation of cheaply generated content, global elections in recent years have had to contend with AI-generated material. For example, in 2024, a year unusually concentrated with general elections globally, researchers tracked 50 elections and found that 80% of them had incidents connected to gen-AI, with 215 incidents logged. Of these incidents, more than two-thirds (69%) played “a harmful role” in the election[5].
Politics and elections are therefore fertile ground for AI-enabled disinformation campaigns. However, the more insidious effects of AI slop do not require any intent to deceive or manipulate. With a communication and media environment overwhelmed by low-quality, mass-generated content, the larger concern is not that people will believe false information, but that they will find it difficult to believe what they see and read at all.
Uncertainty about the veracity of information can lead people to default to existing biases and identity politics, tearing apart the social fabric that enables the inter-group engagement needed to build consensus and a shared vision of the future. The general lack of public trust also undermines the legitimacy of journalistic content and democratic processes, further exacerbating the problem in a vicious cycle.
The rapid improvement of gen-AI models means that the convincingness of synthetic media continues to increase, while the cost of creating such content plummets to almost zero. OpenAI’s Sora 2 is a case in point. Released in October 2025 with minimal guardrails[6], it puts the ability to generate realistic short videos featuring the likenesses of real (and deceased) people into the hands of millions, truly propelling us into a post-truth era.
AI’s impact on science
Another example of AI slop’s societal effects can be found in the field of science, a bedrock of human understanding of the world. In a cogently argued article, Princeton scholars Sayash Kapoor and Arvind Narayanan ask the question, “Could AI slow science?” They answer:
“Even if individual scientists benefit from adopting AI, it doesn’t mean science as a whole will benefit. […] So far, on balance, AI has been an unhealthy shock to science, stretching many of its processes to the breaking point[7].”
To set the scene, Kapoor and Narayanan cite an array of literature from the field of metascience (the science of science), showing that scientific progress has been slowing down despite increased production of scientific papers and studies. While the rate of paper publication increased 500-fold between 1900 and 2015, the rate of genuine breakthroughs that produce leaps in understanding has either remained constant or even declined, depending on the measure used. This is what is known as the “production-progress paradox”.
There may be different explanations for this paradox, but the researchers suggest that the most likely reason is that the increase in production itself is slowing down scientific progress. The volume of papers creates a lot of noise that drowns out the important signals of truly novel work, and top-cited papers that do not depart from the canon get most of the attention. Importantly, the imperative to publish (or perish) also reduces the time spent developing breakthrough work, incentivising authors to work in less risky and more established areas.
The production-progress paradox is a systemic issue that predates the generative AI boom. Since the boom, Large Language Models (LLMs) have been found to be widely used in published academic papers[8], remarkably scaling the productivity of scientists[9]. The accompanying increase in subtle errors, characteristic of AI slop, compounds the problem. Traditional means of quality control, such as the peer review process, cannot cope with the volume and are themselves being encroached upon by AI (with AI being used to review papers), raising questions of trust in the institution of science[10].
Attempting to adapt to the situation, some scientific institutions, including universities and academic publishers, have built policies for AI use and guidelines for its disclosure. While this is a good place to start, a study of thirty such policies in American universities ultimately concludes that currently available institutional guidance places the onus of accountability and compliance on individual researchers[11], something that may not be sustainable in the long term.
In any case, the space continues to evolve. Kende Kefale, a researcher from the University of Cape Town, suggests that in the longer term, universities may evolve from being “[places] where knowledge is primarily generated to [places] where it is rigorously validated” (emphasis in original) to remain relevant. For that, curricula have to be radically adapted; tenures and promotions have to assess and reward success differently; and a university’s brand will hinge on its reputation for rigour[12].
AI companies and their models
Beyond the impacts on societal systems, the deluge of synthetic content can even negatively affect the generative AI models themselves. These models are trained on vast datasets, in many cases scraped from the internet.
As has been demonstrated[13],[14], since 2022 AI slop has encroached into most of our online spaces, and it is difficult to differentiate from human-made data. This means that current and future attempts to trawl the web for data to further train LLMs will inevitably pull AI slop into training datasets.
“Model collapse” refers to a gradual degradation of performance of gen-AI models when they are trained on AI-generated content. In a widely cited paper in Nature, Shumailov and colleagues, who coined the term, find that “indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear”[15].
One way to think about this is making photocopies of photocopies, where fidelity to the original degrades with each round of copying. In the early stages of model collapse, data that occur less frequently within the dataset are lost. At a late stage, the data distribution converges so much that it looks nothing like the original data, with compounded errors from previous iterations baked in[16].
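To make this mechanism concrete, here is a minimal toy simulation in Python (my own illustrative sketch, not an experiment from the Nature paper): each “generation” is trained only on the output of the previous one, modelled here as resampling a corpus from its own empirical distribution. Items that are rare in the original data drop out of the corpus and can never return, so the tail of the distribution is progressively erased.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Generation 0": a human-made corpus over 1,000 distinct items, where a few
# items are common and most sit in a long tail (a Zipf-like distribution).
vocab_size = 1_000
tail_probs = 1.0 / np.arange(1, vocab_size + 1)
tail_probs /= tail_probs.sum()
corpus = rng.choice(vocab_size, size=10_000, p=tail_probs)

for generation in range(1, 21):
    # "Train" a model on the previous generation's corpus: its only knowledge
    # of the world is that corpus's empirical distribution.
    counts = np.bincount(corpus, minlength=vocab_size)
    empirical = counts / counts.sum()
    # The next corpus is sampled entirely from the model, replacing the
    # original human-made data (the indiscriminate training regime).
    corpus = rng.choice(vocab_size, size=10_000, p=empirical)
    if generation % 5 == 0:
        survivors = np.count_nonzero(np.bincount(corpus, minlength=vocab_size))
        print(f"generation {generation:2d}: {survivors} of {vocab_size} items survive")

# The number of surviving items falls with every generation: once a rare item
# drops out of a corpus it can never reappear, so the tail of the original
# distribution is progressively erased and the corpus grows less diverse.
```

Real model collapse involves far more complex models and mixtures of human and synthetic training data, but the same dynamic applies: without a steady supply of fresh human-made data, rare patterns disappear first and errors compound across generations.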
It is in the companies’ interest to ensure that there is enough diverse, quality-controlled human-made content to train their next generations of models[17]. There is already talk of “peak data”, where AI companies run out of online content that can be scraped for model training[18].
To mitigate this risk somewhat (along with the risk of copyright lawsuits), leading AI companies now strike licensing deals with major media companies to use their content in exchange for visibility, attribution and compensation[19]. However, the scale of these mitigations may not be sufficient for the amount of training data needed. Meanwhile, smaller independent publishers who lack access to such deals will struggle to stay afloat or disappear altogether, as the rise of AI search erodes the traffic they receive from search engine click-throughs[20].
Conclusion
While numbers on productivity gains from gen-AI have been projected and bandied about, we need to recognise that those benefits will not be evenly distributed, and that there will be short-term shocks and long-term consequences for all. In this article series on AI slop, I have sought to examine these impacts from multiple angles, to illustrate the present and potential outcomes of lowering the barriers to content creation and flooding the communication environment with low-quality AI media.
Discussions of the negative consequences of AI-generated content should move beyond the narrow focus on malicious use and material impacts on creatives. Much more research needs to be done on the societal and institutional effects of AI slop, acknowledging that the consequences can also come from thoughtless and unfettered use of gen-AI. This is especially urgent outside the Global North, where AI detection technologies work less well[21], and where cultural contexts and digital literacy levels differ.
Adapting to the post-truth world becomes increasingly crucial as a matter of societal resilience. Governments and institutions will have to put in place governance and regulatory mechanisms, such as AI use guidelines or regulations, labelling of AI-generated content, or greater resources for AI detection. Widespread awareness and literacy campaigns will also need to be run, to ensure that the public is equipped with critical thinking and fact-checking skills.
The strength of AI models is directly linked to the strength of the information and media ecosystem. As AI companies create major disruptions in our global communication environment, they would do well to consider the wider impacts of their actions, if only to ensure that continued model improvement remains possible in the longer term. This, of course, has to be couched within a larger imperative: to build technological innovations that are responsibly developed and safely deployed, with outcomes that benefit society as a whole.







