Sky high thinkers

Welcome to Libryo’s Sky high thinkers blog, where thoughts on legaltech, sustainability, law, compliance and technology converge. Filter by topic or popular reads and share your thoughts with us and others. If you have a topic that you’d like to see covered, drop us an email: info@libryo.com


What is AI Slop and How to Avoid It?

Written by Alex Friedmann
on January 17, 2025

The AI industry is booming around the world. Innovation is accelerating, and revenue is way up. Generative AI is now in its scale-up phase, and the technology is here to stay. But if we follow the stream all the way down, from the silicon miners to the chip designers to the model trainers, it's clear that not all content is generated equal. Some outputs are gems; others are just slop.

What is AI slop?

But what is "slop", and why does it matter? "AI slop" refers to low-effort, poor-quality, mass-produced AI-generated content (text, images, and videos). It can also refer to unnecessary, unwanted, or badly integrated AI features. Both can do serious harm if not managed properly.

Imagine publishing AI images for a holiday catalogue and not noticing the six-fingered Father Christmas. Or imagine deploying a new chatbot that's so intelligent it starts informing users of new refund terms you don't even have. There are many real-world examples of frontier AI models making toddler errors on basic tasks. Although such errors are embarrassing, they're obvious and can be blamed on rogue "hallucination". But in a worst-case scenario, generative AI might make mistakes not even an expert could detect. This happens when language models are naively integrated into complex or fast-moving domains without appropriate professional oversight. Such mistakes can incur serious financial liability and reputational harm.

In many cases, however, there's nothing factually incorrect about slop content; it's just not very good. Consider the following passage:

"Embarking on a journey through the dynamic landscape of AI, it's vital to delve into the vibrant tapestry of its capabilities. Arguably, the most pivotal advancements come from comprehensive solutions that seamlessly elevate user experience."

This is AI-generated text that reads like a fifteen-year-old found the dictionary. It's a buzzword salad without much substance, and most people can see that. If generated content doesn't have the right information, structure, context, and tone, then it's hard to engage with and understand. No doubt it's tempting to offload content creation to an unsupervised AI, but doing so can undermine a business's credibility, authenticity, and thought leadership. These are valuable attributes to have in a world where it's becoming faster and cheaper to create content than to consume it. And no one wants to consume slop content.

Generative AI isn't always the right solution

Of course, generative AI can be incredibly innovative and valuable. But this is almost always because its application is limited to narrowly defined problems, with experts involved at every stage. To develop great AI content or features, you must first fully understand the problem to be solved. Sometimes the problem can't (or shouldn't) be solved with generative AI at all, and sometimes traditional machine learning or a simple rule-based approach is faster, cheaper, and more effective.
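To make the rule-based point concrete, here is a toy sketch (the keywords and task are hypothetical illustrations, not any real pipeline): if the job is simply flagging text that mentions certain EHS topics, a few lines of deterministic matching can be faster, cheaper, and more predictable than a generative model.

```python
import re

# Hypothetical example: flag text that mentions specific EHS topics.
# For a narrow, well-defined task like this, a deterministic rule beats
# a generative model on speed, cost, and predictability.
EHS_KEYWORDS = re.compile(r"\b(asbestos|effluent|emissions?)\b", re.IGNORECASE)

def flag_ehs_mention(text: str) -> bool:
    """Return True if the text mentions any of the listed EHS topics."""
    return bool(EHS_KEYWORDS.search(text))
```

The trade-off is coverage: a rule like this misses synonyms a model might catch, but it never hallucinates, costs nothing per call, and is trivial to audit.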

However, in some cases there isn't even a problem to solve; there's just an AI hammer looking for a nail. This is another common form of slop: rolling out AI features that users, customers, or clients neither want nor need. Even if the feature itself is powerful and impressive, if it doesn't solve a specific business problem then it's worse than having no solution at all. Consider the time wasted designing, developing, testing, and maintaining the feature; consider users' annoyance and confusion; consider the conclusion observers may draw that the business doesn't understand its own users, product, or service well enough. Generative AI is not ornamental: it's a tool that must generate revenue and/or cut costs.

So what does this all mean? It depends on whether you’re buying or selling products or services that use generative AI.

Questions if you are buying AI:

If you're buying, then most AI features come bundled with a platform, website, or application that offers other features as well. But when assessing the generative AI features specifically, there are some general questions to consider.

  • Do the AI features actually solve a problem for the business? Value-add from AI must be clearly demonstrated, and must at least be worth more to the business than its cost.
  • Can the problem be solved internally using generative AI? There are many basic copilot-style tools available now, and investing in these tools and training employees to use them effectively should be the very first port of call in any AI acquisition strategy.
  • How versatile are the features? All AI should offer some way to be tailored to specific business contexts and use-cases. There's no generative AI that knows your particular product or service better than you do, so it's important you have some way to make the AI your own.
  • What kind of assurance is provided? Determine how much quality control (if any) is provided, and how much you need to perform yourself. It’s also important to know the type of mistakes the AI could make, and how liability for that will be handled.

Questions if you are selling AI:

If you're selling (i.e. producing AI-generated content or developing generative AI features), there are some things to keep in mind as well.

  • Is it really necessary? Determine first whether there's actually a business problem to be solved at all, and then whether generative AI is the best solution. Ensure you understand your clients' needs and pain points well and work backwards from there, resisting pressure to implement AI merely as a box-ticking exercise. Remember also that generative AI is powerful, but much slower and often much more expensive than traditional machine learning options in the long run.
  • Are people manually involved? Experts should be consulted throughout all stages, especially if the feature or content is technical in nature. Weigh carefully the risks of unsupervised automation, and ensure there is always some degree of manual quality assurance.
  • Have you thought through all the risks? Suppose your language model access is suddenly unavailable for technical reasons, or its cost unexpectedly increases, or the model provider starts censoring information critical to your use-case, or your users' queries and content are leaked in a data breach, or your users are generating illegal or offensive content, or they're attempting to jailbreak your own integrations. Generative AI comes with new risks, and if you're adopting this technology then you must manage the risks that come with it.
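One of these risks, sudden provider unavailability, can be illustrated with a minimal fallback sketch. The provider functions below are hypothetical stand-ins, not real vendor APIs; they only show the shape of routing requests to a secondary model when the primary fails.

```python
# Minimal sketch of a provider fallback chain. The providers here are
# hypothetical stand-ins, not real vendor APIs.

def call_with_fallback(providers, prompt):
    """Try each (name, call) pair in order; return the first success."""
    failures = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in production, catch provider-specific errors
            failures[name] = str(exc)
    raise RuntimeError(f"all providers failed: {failures}")

# Stand-in providers: the primary simulates an outage.
def primary_model(prompt):
    raise ConnectionError("provider outage")

def secondary_model(prompt):
    return f"answer to: {prompt}"

used, answer = call_with_fallback(
    [("primary", primary_model), ("secondary", secondary_model)],
    "summarise this regulation",
)
```

A real integration would also log failures, cap retries, and budget for the cost difference between providers.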

Using generative AI for EHS regulations

At ERM Libryo, we are well aware of the risks and benefits of generative AI, having developed generative AI features and content since 2020. Our approach has been to prioritise embedding our deep subject-matter expertise in all of our AI features. By adopting a human-in-the-loop approach, we have configured and trained both our generative and proprietary models using expertly curated datasets and reasoning patterns. This approach has enabled us to maintain a high degree of quality assurance and to scale our production capabilities with decision support systems and artificial experts. It has been the key to extracting insight from tens of millions of technically complex EHS regulations in dozens of languages around the world.

As we move into 2025, it's apparent the AI honeymoon phase is over. Due to social and economic pressures ranging from unemployment to energy security, public perception of AI advancement is uneasy at best and openly hostile at worst. This negative perception, coupled with heightened sensitivity to AI's applications, means people are less tolerant of AI slop than they were in the past. And yet, in spite of this, the AI boom continues. Clearly, artificial intelligence remains a high-stakes game with big payouts, provided we can drop the slop.

💡 Read this next: Are LLMs the Silver Bullet for Compliance?


➡️ Book a demo to see Libryo in action today