I2IDL’s Strategic Horizons: Predictions for Data Infrastructure and Enterprise Learning Systems in 2026

The pace of change in education, training, and development technologies (as with pretty much all digital technologies) shows no signs of slowing in 2026. Over the next few weeks, we’ll be sharing I2IDL’s top ten predictions for the year ahead: predictions relevant to our community of learning engineers, learning technology practitioners, data interoperability specialists, and organizations concerned with open-source infrastructure for learning and workforce data.

Our first prediction is about “Learning Slop.”

Prediction #1: “Learning Slop” will create a signal-to-noise problem, catalyzing demand for data provenance standards

Confidence Level: 4/5 ★★★★☆
The market for AI in education is projected to exceed $32 billion by 2030, and 60% of educators already report using AI tools regularly. Organizations are rapidly deploying AI tutors, AI-authored assessments, and AI-curated learning pathways. Yet virtually all of it arrives as a “black box,” without meaningful instrumentation, interoperability, or provenance tracking.
Meanwhile, the democratization of AI-powered content creation is already flooding the market with learning materials that look professional but lack rigor and ignore learning science principles. The rise of low-code/no-code AI development also means non-technical users will increasingly create AI tools with zero consideration for data, interoperability, or pedagogical standards.
We’re entering the era of “Learning Slop”: AI-generated courses, auto-populated dashboards, and algorithmically assembled curricula that are trivially easy to produce but difficult to evaluate. School districts, corporate Learning and Development (L&D) teams, and program managers will face an unprecedented signal-to-noise problem. Distinguishing genuinely effective learning solutions from polished garbage will be nearly impossible without specialized expertise that most buyers don’t have.
The immediate market response will favor established brands. When quality is opaque, reputation becomes the primary proxy for trust. Large publishers, well-known Learning Management System (LMS) vendors, and recognized assessment providers will command premium pricing simply because buyers perceive them as “safe.”
The EU AI Act’s transparency mandates (effective August 2026) will require documentation and machine-readable marking of synthetic content, but regulatory compliance isn’t the same as interoperability or quality learning design. Systems may be auditable without being able to share data meaningfully or teach subjects effectively.
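To see how little a “compliant” mark has to say, consider a minimal sketch in Python. The Act requires that synthetic content be detectable and marked in a machine-readable way; as far as we know it does not prescribe one specific format, so the shape below (and the vendor name in it) is purely our own invention for illustration:

```python
# Hypothetical: roughly the minimum a machine-readable
# synthetic-content mark might carry to be detectable as
# AI-generated. Names and fields are invented for illustration.
synthetic_content_mark = {
    "ai_generated": True,
    "generator": "vendor-model-x",  # invented vendor/model name
    "marked_at": "2026-08-02",
}
```

Nothing in a mark like this helps anyone compare learning outcomes, move data between systems, or judge whether the content teaches well.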
These issues will eventually boil over.
Educators and administrators will be embarrassed by AI errors in courses and materials, or by botched high-stakes assessments. Organizations will discover they can’t analyze, compare, or transfer learning data across AI-powered systems. This crisis will be slow-burning rather than acute: it will manifest as abandoned products, expensive custom integrations, and the inability to produce meaningful analytics. The pain probably won’t be widely felt until 2028–2029, when organizations move from “deploy and experiment” to “professionalize and mature.” But by then, the Learning Slop will already be entrenched.
The longer-term consequence may be constructive. Just as software supply chain opacity gave rise to the idea of Software Bills of Materials (SBOMs), the learning content crisis will catalyze demand for analogous transparency mechanisms. Let’s call them “Learning Bills of Materials” or “Data Nutrition Labels.” These would document content provenance (human-authored, AI-generated, hybrid), training data sources, assessment and pedagogy validation methods, analytics calculation methods, update history, and so on.
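To make the idea concrete, here is a minimal sketch of what such a manifest might contain, expressed as a Python dataclass. Every field name below is our own illustration of the categories just listed, not an existing standard; a real framework would emerge from the standards work we predict next.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Provenance(Enum):
    """How a piece of learning content was produced."""
    HUMAN_AUTHORED = "human-authored"
    AI_GENERATED = "ai-generated"
    HYBRID = "hybrid"


@dataclass
class LearningBillOfMaterials:
    """Hypothetical transparency manifest for a course, assessment,
    or analytics dashboard. Field names are illustrative only."""
    content_id: str
    provenance: Provenance
    # Models and corpora involved in generation, if any
    training_data_sources: list[str] = field(default_factory=list)
    # How pedagogy and assessments were validated
    # (expert review, pilot study, psychometric analysis, ...)
    validation_methods: list[str] = field(default_factory=list)
    # Reported metric name -> plain-language description of its calculation
    analytics_methods: dict[str, str] = field(default_factory=dict)
    # Dated change-log entries
    update_history: list[tuple[date, str]] = field(default_factory=list)
```

A “Data Nutrition Label” would then be little more than a human-readable rendering of the same fields.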
We predict that early movers will begin publishing such metadata voluntarily in 2026–2027 as a differentiation strategy. Standards bodies will formalize frameworks by 2028–2029. By 2030, procurement requirements, especially in government and regulated sectors, may mandate transparency documentation, following the path of SBOM regulation.
So what? For buyers: build internal expertise to assess content and analytics rigor now. Ask vendors pointed questions: How was this content developed? What validation was performed? How are metrics calculated, and why are they meaningful? Many vendors won’t have good answers, but the questions signal discernment and shape the market.

For vendors: get ahead of the transparency curve. Publish methodology documentation, offer provenance metadata, and position transparency as a competitive advantage before it becomes a requirement.

For standards bodies: this is a greenfield opportunity. The organizations that establish trusted provenance and transparency frameworks will hold significant influence over the next decade’s learning technology ecosystem.
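For buyers who want to operationalize those questions, here is a sketch of a procurement-side check against the hypothetical LearningBillOfMaterials manifest above. The required fields and thresholds are our assumptions for illustration, not a published procurement rule.

```python
# Continues the hypothetical LearningBillOfMaterials sketch above.
def procurement_check(lbom: LearningBillOfMaterials) -> list[str]:
    """Return a list of transparency gaps; an empty list means the
    manifest answers the baseline questions a buyer should ask."""
    gaps: list[str] = []
    if (lbom.provenance is not Provenance.HUMAN_AUTHORED
            and not lbom.training_data_sources):
        gaps.append("AI-involved content lists no training data sources.")
    if not lbom.validation_methods:
        gaps.append("No pedagogy or assessment validation is documented.")
    if not lbom.analytics_methods:
        gaps.append("Metrics have no documented calculation method.")
    if not lbom.update_history:
        gaps.append("No update history is provided.")
    return gaps


# Example: a hybrid course with expert review but nothing else documented.
manifest = LearningBillOfMaterials(
    content_id="course-1042",  # invented identifier
    provenance=Provenance.HYBRID,
    validation_methods=["expert review"],
)
for gap in procurement_check(manifest):
    print("Gap:", gap)
```

Even a toy check like this makes the point: the questions become cheap to ask once the answers are structured data rather than marketing copy.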