The AI revolution was supposed to unleash human potential.
Instead, it's unleashing an avalanche of mediocrity.
Marketing publishes bland posts that generate no engagement.
Sales reps create generic proposals that don’t close.
Employees send wordy emails that don’t move things forward.
The promise was transformation. The reality is frustration.
Far from getting breakthrough results, we're now seeing something else entirely:
AI Slop: low-quality, generic AI outputs that create more noise than value.
Companies empowered employees with AI, believing that access alone would drive transformation.
But many of these AI strategies are backfiring.
We're Drowning in AI Slop
Recent studies show that enterprise AI initiatives are failing across the board:
MIT Media Lab reports that 95% of enterprise GenAI pilots fail to generate measurable financial returns.
BCG found that 74% of companies have yet to show tangible value from AI initiatives.
NTT Data puts the failure rate of current GenAI projects at 70–85%, far higher than for typical IT efforts.
Gartner predicts that at least 30% of AI projects will be abandoned by the end of 2025 due to poor data quality, inadequate risk controls, or unclear business value.
Four studies, same conclusion. Despite massive investment, most AI work is going nowhere.
Companies are producing AI slop instead of AI success.
Three Traps That Lead to Slop
I’ve long argued that the key element of any AI strategy is to empower employees.
People know their own workflows. They’re best at spotting opportunities to automate, augment, and accelerate.
Empowering them with AI tools also drives greater employee engagement and curbs their anxiety about being replaced.
But empowerment without an effective implementation strategy creates AI slop through three avoidable traps.
The Context Trap
When employees use AI tools without grounding in the business context, it's like asking a genius to solve problems blindfolded.
AI needs proper context to perform well—specific knowledge about your industry, company, customer needs, and workflow requirements.
Without this foundation, even the most sophisticated AI model produces generic outputs that don't meet objectives.
Conscientious employees spend more time editing and correcting AI responses than they would have spent doing the work themselves.
And the less motivated ones simply push out the generic slop.
The Prompting Trap
Many organizations are training employees in "prompt engineering"—the art of crafting precise AI instructions.
There are at least three good reasons not to focus on prompting:
It's too slow. Training people in this new skill takes a lot of effort and a long time.
The tech is moving too fast. Tools that required careful prompting yesterday are foolproof today.
It's unsustainable. Expecting every employee to become a prompt engineer is like requiring every driver to be a mechanic.
You don't fix a systemic problem by training everyone to work around it.
The Centralization Trap
Skeptical of training, some companies swing to IT-led implementations instead. This backfires in a different way.
IT teams and AI consultants lack the nuanced understanding of daily workflows that determine whether an AI intervention actually helps or hinders productivity.
Top-down AI solutions can also increase employee resistance and fear, creating the exact opposite of the unleashed potential and employee engagement that companies want.
The result: high investment, minimal value, and growing AI anxiety among the people who should benefit most.
The Custom AI Solution
To avoid these slop traps, leaders need to carefully manage the three fundamental levers that determine whether AI produces value or noise:
Model – the performance and user features of the chosen LLM.
Context – the additional knowledge the model draws from.
Prompt – the instructions that tell the model what to do with the context.
Right now, most companies are choosing a preferred model (e.g. Copilot, ChatGPT, Gemini) and then leaving the context and the prompting up to the user.
That's like giving an amateur a race car and expecting them to tune the engine and develop the racing strategy.
And this is why we’re producing more slop than success.
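To make the three levers concrete, here is a minimal sketch of how they come together in a single call, assuming the OpenAI Python SDK. The model name, knowledge file, and instructions are illustrative, not a prescription.

```python
# A minimal sketch of the three levers in one call. Assumes the OpenAI
# Python SDK; the model name, file path, and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL = "gpt-4o"  # Lever 1 (Model): which LLM does the work

# Lever 2 (Context): company knowledge the model should draw from,
# here read from a hypothetical local playbook file.
with open("company_playbook.md", encoding="utf-8") as f:
    playbook = f.read()

# Lever 3 (Prompt): instructions for what to do with that context.
instructions = (
    "You are a proposal writer for our sales team. "
    "Ground every claim in the playbook below, follow our tone guidelines, "
    "and flag anything you are unsure about instead of guessing.\n\n"
    f"PLAYBOOK:\n{playbook}"
)

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Draft a renewal proposal for Acme Corp."},
    ],
)
print(response.choices[0].message.content)
```

Nothing here is exotic. The slop problem is rarely the first lever; it's that the second and third are left blank.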
The winning strategy is to teach employees to build custom AI—reusable programs with embedded instructions and context.
ChatGPT pioneered this in 2023, and now all the major platforms offer custom AI capabilities:
ChatGPT: GPTs & Projects
Copilot: Agents & Notebooks
Gemini: Gems
Claude: Projects
Building custom AI is the highest-leverage, most powerful feature in the entire pantheon of AI tools.
But it's woefully underutilized.
Despite 12–15 million people now paying for ChatGPT and having access to custom GPTs and Projects, only about 10–15% have ever created one—and it’s estimated that fewer than 5% use them regularly.
Giving people access to premium AI tools is not enough. You also need to help them use their most critical feature: custom AI.
Set Context for Success
Instead of making employees enter context on their own every time, custom AI lets you embed key knowledge directly into the tool itself.
Companies can create comprehensive knowledge documents—company offerings, policies, process playbooks, standard data, industry insights, and workflow specifications—that every custom AI draws from.
Employees can then build specialized AIs for their unique roles while ensuring those tools pull from accurate, up-to-date company knowledge.
This centralized approach also begins to establish knowledge management practices which will become increasingly important as AI matures and companies strive to differentiate.
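As a sketch of what "one source of truth, many custom AIs" can look like, here is a minimal example assuming a shared folder of internal markdown documents; the folder layout and file names are hypothetical.

```python
# A sketch of centralized context: one shared knowledge folder feeding many
# role-specific custom AIs. Folder layout and file names are hypothetical.
from pathlib import Path

KNOWLEDGE_DIR = Path("company_knowledge")  # the single source of truth

def load_knowledge(*doc_names: str) -> str:
    """Concatenate the requested knowledge documents into one context block."""
    sections = []
    for name in doc_names:
        text = (KNOWLEDGE_DIR / f"{name}.md").read_text(encoding="utf-8")
        sections.append(f"## {name}\n{text}")
    return "\n\n".join(sections)

# Each role pulls only what it needs, but always from the same source.
sales_context = load_knowledge("offerings", "pricing_policy", "proposal_playbook")
support_context = load_knowledge("offerings", "escalation_process", "tone_guidelines")
```

When the playbook changes, you update one folder and every custom AI built on it inherits the correction, instead of each employee pasting in a stale copy.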
Make Prompting Foolproof
Instead of teaching employees to write perfect prompts every time, teach them to build reusable custom AI tools that apply smart instructions.
It's easier, more effective, and eliminates the prompting guesswork.
Custom AI is like building a specialized "act like" prompt that defines the tool’s:
Why: Purpose and objectives
How: Process and approach
What: Deliverables and standards
Then point the custom AI at your internal knowledge documents, and you're creating tools that consistently deliver valuable results with far fewer hallucinations.
Custom AIs can also be refined, improved, and shared across teams—they become IP assets of the organization.
With custom AI, employees are empowered to build and adapt their own AI, and the two biggest sources of slop—missing context and weak prompting—are no longer left to chance.
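To show what such a reusable tool can look like under the hood, here is a sketch assuming the OpenAI Python SDK; the class, instructions, model name, and knowledge file are all illustrative.

```python
# A sketch of a reusable custom AI: the Why/How/What instructions and the
# company context are embedded once, so users only supply the task at hand.
# Assumes the OpenAI Python SDK; names, wording, and file paths are illustrative.
from pathlib import Path
from openai import OpenAI

class ProposalAssistant:
    """A custom AI for sales proposals, shareable and improvable like any other asset."""

    WHY = "Purpose: produce client-ready proposals that win renewals."
    HOW = (
        "Process: follow the proposal playbook, mirror our tone guidelines, "
        "and tie every recommendation to a specific offering and pricing tier."
    )
    WHAT = "Deliverable: a one-page proposal with summary, scope, pricing, and next steps."

    def __init__(self, knowledge_path: str = "company_knowledge/proposal_playbook.md"):
        self.client = OpenAI()
        context = Path(knowledge_path).read_text(encoding="utf-8")
        self.instructions = (
            f"{self.WHY}\n{self.HOW}\n{self.WHAT}\n\nCOMPANY KNOWLEDGE:\n{context}"
        )

    def run(self, task: str) -> str:
        """Run one task through the embedded instructions and context."""
        response = self.client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": self.instructions},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

# Employees reuse the tool without writing a prompt from scratch:
# print(ProposalAssistant().run("Draft a renewal proposal for Acme Corp."))
```

The same Why/How/What structure translates directly into a GPT, Gem, Notebook, or Project built through the vendor interfaces, no code required; the point is that the instructions and context live in the tool, not in each user's head.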
From Strategy to Success
The path from AI slop to AI success starts with training people on how to build and use custom AI.
At Coachfully.AI, we've found the most success by hosting workshops with company leaders where each person is shown how to build a custom AI designed for their most important work.
This gets everyone aligned on the AI adoption strategy while also providing hands-on training on the best practices of custom AI.
Then we suggest that companies schedule regular check-in sessions where people can share wins, challenges, and questions.
As colleagues see new use cases, best practices, and tangible productivity gains, enthusiasm spreads throughout the organization, and individual progress is turned into collective learning and momentum.
Soon other teams want custom AI workshops, and what started as an experiment becomes your next competitive advantage.
Time to Stop the Slop
The majority of enterprise AI initiatives fail because companies either empower employees without structure or push solutions without employee engagement.
The solution isn't better models or more training—it's smarter implementation.
Custom AI eliminates slop by embedding context and instructions into reusable tools.
Instead of expecting employees to craft perfect prompts, you teach them to create specialized AI assistants that know your business.
Companies that get this right won't just see better ROI.
They'll create workplaces where technology genuinely empowers people to do the best work of their lives.