The internet, once hailed at its inception as a digital utopia of human creativity, collaboration and curiosity, is now showing signs of strain, and some say it’s being choked by a rising tide of low-quality, machine-generated content. From bizarre product reviews to SEO-stuffed blog posts and eerily generic travel guides, critics argue that the web is becoming harder to navigate, less trustworthy and, frankly, a whole lot more boring.
Immediately, fingers are being pointed at the mass deployment of generative AI tools that can churn out articles, images and even videos faster than you can say “algorithm”. In a way, the blame is being shifted to technology. Technology that was developed by humans, of course, but technology nonetheless.
But, is it all doom and digital decay? Not everyone agrees. Supporters of AI content argue that these tools democratise creativity, power small businesses and fill content gaps with speed and efficiency. After all, not every product description or user manual needs to be a literary masterpiece!
However, the other side of the coin is that this is going too far, and that by “over-democratising” access to digital content, we’re actually ending up in a position in which the overall quality of content on the internet has decreased dramatically.
What Exactly Is AI Slop, and Why Would It Be a Problem If It Took Over the Internet?
“AI slop” is the nickname critics have given to the growing wave of bland, repetitive and often misleading content churned out by generative AI tools. It’s a reference to things like generic blog posts that all sound the same, product reviews that don’t say much or AI-generated news stories with no clear source or substance. It’s not that all AI content is bad – far from it, in fact. But the problem is, when it’s pumped out at scale with little oversight or originality, things can start to feel a bit… sloppy.
The term is a direct reference to the “slop” fed to pigs – food scraps mashed into a semi-liquid mush for farm animals that will eat just about anything and everything. Calling AI content “slop”, then, is really asserting that much of it is a big mish-mash of low-quality material with little to no value. So does that make us, consumers of the internet, the pigs? Well, maybe, but perhaps that’s diving too deep into the metaphor.
The real concern is what this flood of mediocre content might do to the internet as a whole. If search results are clogged with AI-written filler, it gets harder to find accurate information. Smaller creators may struggle to compete with the speed and volume of machine-made material. There’s also the risk of trust erosion – when you can’t tell if a photo, article or review is genuine, how can you rely on it?
At its worst, AI slop could turn the web into a noisy, soulless place where quality and nuance get drowned out by algorithms optimised for clicks. That’s not just annoying, it’s actually a very real threat to how we share knowledge, make decisions and stay informed.
We wanted to hear the opinions of experts in various fields, all of whom have an intimate knowledge and understanding of the internet and digital content. We asked them what they think of the idea of AI slop polluting the internet, whether it’s a big problem we should be concerned about, and what we could do to stop it getting worse – or at least mitigate the problem slightly.
Here’s what they said.
Meet the Experts
- Dan Chorlton: Founder of GOA Marketing
- Jo Sutherland: Managing Director at Magenta and AI Ethicist
- Ben Johnson: CEO of BML
- Siobhan Byrne: Co-Founder and SEO Content Director at Bonded
- Charlotte Stoel: Group Managing Director of Firefly Communications
- Mike King: CEO and Founder at iPullRank
- David Weinstein: Co-Founder and CEO at KayOS
- Chris Beer: Senior Data Journalist at GWI
- Matthew Robinson: Senior PR and SEO Strategist at Definition
- Nicola Hughes: Head of SEO Strategy at TAL Agency
- Isabel Villadolid: Lead Creative Strategist at Brave Bison
- Joshua Allsopp: Digital Content Strategist at INFINITE
Jo Sutherland, Managing Director at Magenta and AI Ethicist
“Yes, AI slop is a real problem. The internet is being flooded with low-effort, AI-generated content. Search results are worse, publishers are losing traffic, and social media is noisier than ever. And let’s be honest, most of it is boring.
This isn’t just about oversaturation or poor quality. It’s about the slow erosion of originality and creativity. Everyone thinks they can write now. They can’t.
The internet’s already creaking under the weight of algorithmic noise – content with no real insight, just bland repetition and robotic phrasing. It’s all a bit… nothingy.
And then there’s the deeper and darker issue. As AI-generated images and videos become harder to distinguish from the real thing, we face the “liar’s dividend” – a world where even genuine content is dismissed as fake. If we’re not careful, we’ll end up with bots shouting into a void, while the rest of us tune out entirely.
As communicators, we have a duty of care to call out lazy, derivative, or just plain irritating uses of AI. We need to protect the craft of storytelling and stop outsourcing creativity to models that rehash other people’s words, almost always without credit or remuneration.
AI can be an incredible tool. But not if we let it swamp the ecosystems we depend on for information and connection. We need better training. Not just prompt writing tips, but genuine AI literacy.”
Ben Johnson, CEO of BML
“As AI gets better at faking it, trust becomes the ultimate currency. We’re learning to spot the difference between a message crafted by a machine and one that comes from a real person. We’re seeking out brands that show their human side, imperfections and all.
Maybe the future of marketing isn’t more data, more automation, or more “personalisation”. Maybe it’s about being brave enough to be real. Maybe it’s about showing up, face-to-face, pen-to-paper, heart-to-heart.
The Takeaway
So, as the AI slop keeps rising, maybe the smartest move isn’t to shout louder, or more and more frequently. Maybe it’s to step away from the noise, look someone in the eye, and say something real. In a world obsessed with artificial intelligence, authenticity might just be the most disruptive force of all.”
Siobhan Byrne, Co-Founder and SEO Content Director at Bonded
“The internet being polluted by AI slop is a problem we are facing now, and it will only get worse in the future. The abundant text- and visual-based models out there suck up vast amounts of pre-existing content for training, and stark biases exist within content online today. Using AI-powered tools to churn out content based on what already exists on the internet will perpetuate these biases at scale, with little to no care for fact-checking and truth. We will reach a point where models are trained on more AI-generated content than original content, which can lead to model collapse.
Content publishers should be transparent about their content production processes, be clear on the sources they are citing and undertake thorough fact-checking with trusted sources and experts before launching into the hyper-processed content creation that AI so temptingly plates up for us with a few easy prompts.”
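The “model collapse” dynamic Byrne describes can be sketched with a toy simulation (entirely illustrative, not drawn from the article; all parameters are hypothetical). Here the “model” is just a Gaussian summarised by a mean and standard deviation, refitted each generation to samples drawn from its own previous fit. Sampling error compounds generation after generation, and the learned distribution’s spread drifts toward zero:

```python
import random
import statistics

# Toy illustration of model collapse: a "model" (a Gaussian, i.e. a mean
# and standard deviation) is repeatedly refitted to data sampled from its
# own previous generation. Because each fit is made from a small, imperfect
# sample, estimation error compounds and the learned spread collapses -
# the distribution gradually "forgets" the variety of the original data.
# Parameters below are hypothetical and chosen purely for illustration.

random.seed(42)

mu, sigma = 0.0, 1.0             # generation 0: the "real" data distribution
n_samples, n_generations = 20, 2000

for _ in range(n_generations):
    # "train" on synthetic data sampled from the previous generation's model
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
    # refit the model to that synthetic data
    mu, sigma = statistics.mean(data), statistics.stdev(data)

print(f"sigma after {n_generations} generations: {sigma:.6f}")
# the fitted spread ends up far below the original 1.0
```

Real model collapse in large generative models is far more complex, but the mechanism is analogous: each generation learns from an imperfect sample of the previous one, and the tails of the original distribution are the first thing to be forgotten.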
Mike King, CEO and Founder at iPullRank
“The internet is absolutely being flooded with AI slop, and it’s not just a glitch, it’s a strategy. OpenAI attacks Google on two fronts: first, by redefining how people satisfy information needs through conversational AI, and second, by polluting Google’s index at scale with synthetic content that degrades the quality of traditional search results. It’s a full-on relevance war.
The problem isn’t just bad content, it’s the collapse of trust in what’s real. If we don’t intervene, we’re looking at an internet that’s less useful, less human, and dangerously manipulated. The solution isn’t banning AI; it’s investing in systems that reward originality, penalize manipulation, and surface genuinely helpful information. Platforms must take accountability for their role in this ecosystem, and we as creators must raise the bar. Otherwise, we’re all just training data for the next wave of noise.”
David Weinstein, Co-Founder and CEO at KayOS
Is this a big problem we’re facing?
“Yes – but not just because it makes Google worse. What we’re seeing is the early stage of a much deeper cognitive shift. Generative AI has made it effortless to flood platforms with synthetic content that mimics human language but lacks meaning or intent.
The scale is staggering: between 2021 and 2024, fake AI answers on Quora rose 258 percent; AI-generated Temu reviews jumped 1,361 percent between 2020 and 2024; and over 40 percent of Facebook posts are now estimated to be AI-generated. This isn’t just more noise – it’s content detached from source, purpose and memory.
We’re not building a smarter web, we’re building a synthetic one. And when this becomes the dominant input for search engines and AI training data, the whole system starts to collapse in on itself. It’s not growth – it’s recursive degradation.”
Should we be concerned?
“Deeply. When human thought is shaped by low-quality signals, it reshapes how we reason. We absorb hollow patterns – summaries of summaries, reflections of reflections – and lose the ability to distinguish real insight from empty form. French philosopher Jean Baudrillard called this hyperreality: when representations no longer reflect reality but only each other. That’s what AI slop is – content that mimics meaning while severing it from truth.
This erosion of meaning also echoes Iain McGilchrist’s work on the brain’s hemispheres. He warns of a cultural drift toward abstraction and manipulation (left-brain dominance), at the expense of embodied understanding and context (right-brain thinking). Unchecked generative AI accelerates this shift.
Together, these ideas suggest we’re not just polluting the internet – we’re distorting the way people think. If the content we consume becomes synthetic, recursive, and unmoored, our cognition risks becoming the same: fast, shallow, and disconnected from reality.”
What are the future implications of the internet being overwhelmed with AI slop?
“We鈥檙e entering a dangerous recursive loop: AI-generated content floods the internet and newer AI models are trained on that same content. Over time, this leads to a compounding degradation in quality and coherence. Each cycle makes the internet less trustworthy and the models less grounded in reality. This isn鈥檛 just a data problem – it鈥檚 an epistemic one.
The web starts to resemble a hall of mirrors where information reflects itself without ever touching a source. It echoes the Dead Internet Theory – the idea that much of the internet is already synthetic, maintained by bots and automated systems with little genuine human input.
The result is a decline in content that informs, challenges or teaches. Instead, we get text that mimics structure but lacks substance. Eventually, this could erode digital knowledge, AI performance and public trust altogether.
What can we do to mitigate the negative effects?
“There are two fronts to tackling this problem: design and detection. First, we need better systems for identifying content that is derivative, incoherent or purely synthetic. But more importantly, we need to rethink how we build AI from the ground up.
At KayOS, we focus on agent-based systems that are grounded in structured, contextual memory. Our agents don’t just generate content – they reason with you, learn from feedback and evolve alongside your operations. We use a purpose-built ontology to anchor meaning and ensure that outputs are tied to real goals, not just plausible strings of text.
This kind of infrastructure is critical if we want AI to support sense-making, not short-circuit it. The goal isn’t faster content generation. It’s intelligence that compounds over time, stays aligned with context and helps humans think better, not simply outsource the thinking altogether.”
Chris Beer, Senior Data Journalist at GWI
Is AI slop polluting the internet a big problem we’re facing?
“‘AI slop’ is a real risk, but only when convenience and speed are prioritised over creativity.
“As GWI’s data shows, on average, nearly half (44%) of social media users don’t mind AI-generated content. That suggests that people aren’t inherently anti-AI, and that what matters more is whether the content feels relevant and thoughtful.
“AI crosses over into ‘slop’ when it’s used to churn out generic, impersonal content. But when used intelligently, to test ideas or tailor content for a specific platform, it can actually fuel stronger creative work.”
Should we be concerned?
“From a brand perspective, the real concern isn’t whether to use AI, but how. Those who use it with purpose and with their audience in mind are far more likely to succeed. The moment it becomes a shortcut for quantity over quality, you risk falling into the trap of ‘AI slop’.”
What can we do to mitigate the negative effects of this?
“Brands can stay ahead of the curve by tailoring content to the platform at hand. For example, Maybelline’s mascara CGI video was a viral TikTok sensation, but the same concept on X might have flopped. If you manage to jump on an AI-generated trend before it passes by, you could hit the jackpot.
“With shrinking teams and tighter timelines, knowing where AI content will land well, and where it won’t, helps teams prioritise better. AI can absolutely support creativity, but it has to serve the audience first, not just the algorithm. Be smart, yet creative with it, and you’ll stay ahead of the game and avoid falling into the AI slop trap.”
Matthew Robinson, Senior PR and SEO Strategist at Definition
“The rise of low-quality, mass-generated AI content is a growing concern, especially for marketing and PR teams. While AI can be a useful tool for efficiency, we’re seeing quantity outpace quality across much of the internet. This flood of generic content risks burying valuable, experience-led insights and could erode trust in search results. Over time, the credibility and usefulness of online information may decline if this trend continues.
There is a real danger in letting AI define narratives without human oversight, especially when it comes to brand messaging and thought leadership. To counter this, we need to double down on content grounded in E-E-A-T principles: Experience, Expertise, Authoritativeness, and Trustworthiness. Strong editorial standards and human oversight are key. AI should enhance content creation, not replace critical thinking, originality, or firsthand knowledge. The future of the internet relies on creators using AI responsibly and keeping quality front and center.”
Nicola Hughes, Head of SEO Strategy at TAL Agency
“The internet being ‘polluted’ by AI slop could escalate into a big problem if not monitored and managed appropriately. Mass-produced, low-quality content is no stranger to the internet, and generative AI is scanning all of that content extremely quickly for real-time results. While AI can be a fantastic tool for quickly obtaining information, and proves very effective when used correctly, it’s important to remember that it’s still merely a tool – it needs professional oversight, and it’s a nuanced conversation.
All information on the internet must be regulated appropriately, and AI is no exception. AI slop can be very harmful; academics like Wachter have termed it ‘careless speech’, where the data AI is pulling is essentially spam – inaccurate, overly simplified and biased responses. What we need from AI is the opposite of this: objective, factual, high-quality content. Otherwise we risk falling victim to falsified information and an internet flooded with AI slop.
We should be concerned, to an extent, because the internet being polluted by AI slop could be very detrimental to the validity of information. Not monitoring AI-generated information encourages the circulation of misinformation, and as AI-generated content continues to see exponential growth, these risks will accelerate with it. We also have to consider the ethical implications of AI slop: publishing material that has not been audited by a human could reproduce bias and offensive language, and infringe copyright laws. As well as providing a poor user experience, AI slop can damage brand reputation and erode trust and credibility, as the sources pulled can be unoriginal, inaccurate and lacking in humanised nuance. We should always be concerned about the acceleration of technological advancement, and stay mindful and informed of the risks of the digital era.
It’s important to be informed of the implications of AI, and to be proactive, both as operators and users of AI, to balance efficiency with integrity. We can mitigate the negative effects of AI slop by continuously auditing and monitoring these systems, and by being mindful of how we use AI – ensuring it’s not our first and only port of call, and that it’s used solely as a tool, not a crutch. As an AI user, you can certainly self-audit these systems. As you would with any information circulating the internet, you should remember that research is not gospel, and look for original, validated pieces of information backed by real human expertise. AI literacy is important for mitigating the negative effects of AI spam. Equally, operators and coordinators of AI systems must take this proactive approach: auditing systems, establishing clear use policies, and continuously evaluating the platforms offered.”
Isabel Villadolid, Lead Creative Strategist at Brave Bison
“AI slop might flood the internet with fast, cookie-cutter content, but for performance marketers, it’s not a threat. It’s a call to sharpen our edge. While some brands may lean on AI to churn out quick ads, the real winners will be the ones who think smarter, not just faster. Performance comes from clarity: laser-focused objectives, a deep understanding of your brand and audience, a mapped customer journey, and precise insight into purchase triggers. Then comes the clincher – closing the loop with ruthless performance analysis.
Without this, AI is just noise. And in a noisy world, data-backed, strategic creative cuts through. AI can speed up the process, but it can’t replicate strategic craft. To stay ahead, we need to double down on strategic insight, test relentlessly and learn fast. The future belongs to brands that blend AI’s efficiency with real human insight and data-backed creatives.”
Joshua Allsopp, Digital Content Strategist at INFINITE
“In the dark corners of the web, you’ll find something called the Dead Internet Theory. For years, so the conspiracy goes, vast swathes of the online world have been replaced with artificially generated content at the expense of human users. It might sound crazy, but the stats reveal a truth stranger than fiction. Almost half of all web traffic is now down to non-human activity, and something like 10-15% of all social media accounts are actually bots. In 2022, barely 2% of social content was AI-generated, but by next year it is set to be half… and these are conservative estimates.
Organic content (in both senses) is already at the mercy of social media platforms and their relentlessly revenue-oriented algorithms, meaning the old engagement models just aren’t working. The saving grace, however, is that users are also becoming much more discerning about the content they choose to consume. Essentially, we’re getting better and better at sifting out the good stuff.
It’s tempting to lament the death of the internet, but really, we’re on the cusp of a golden age of content creation. AI is enabling things we didn’t think possible and breathing new life into tired old models. AI slop is just AI used poorly. If businesses and content creators want to stand out amidst this growing sea of trash, they need to get better at using these new tools. Ultimately, engaging, original, emotive and (importantly) human content will always prevail when your audience is human too.”