You can't run an AI fluency program on a platform that was built before AI existed
TL;DR
- 82% of enterprise leaders say their organizations provide some form of AI training in 2026, yet 59% still report an AI skills gap. The problem is rarely the content. It is the platform.
- Traditional LMS platforms undermine AI fluency programs by delivering AI content through pre-AI architecture that cannot personalize pathways, adapt to individual roles, or model the AI-first workflows employees are supposed to learn.
- An effective AI fluency training program needs a platform that uses AI at every layer: personalized role-based pathways, just-in-time agent support, fast content maintenance, and community-driven peer learning.
There is a quiet credibility problem at the center of most enterprise AI fluency training programs. Organizations are spending real budget to upskill employees on AI. They are building curricula, hiring facilitators, and loading content into their LMS. And then they are delivering that training through a platform designed in 2009.
The message lands somewhere between confusing and ironic. You are asking employees to adopt AI-first thinking. You are doing it through a tool that has no AI in it, requires a login most people forget, and has not meaningfully changed its interface in years.
This is not a curriculum problem. It is a platform problem. And until L&D leaders address the infrastructure beneath their programs, completion rates will keep climbing while behavioral change stays flat.
What the data actually shows
82% of enterprise leaders say their organization provides some form of AI training in 2026. Yet 59% still report an active AI skills gap. That gap is not explainable by content quality alone.
Fewer than 3% of the workforce qualify as what researchers call AI practitioners: employees who have integrated AI tools into their daily workflows and are seeing measurable productivity gains. 85% have no value-driving AI use case at all. And 25% do not use AI for work in any capacity, despite their organizations having invested in training programs.
The issue is not completions. Employees are finishing modules. They are watching the videos, passing the quizzes, and satisfying the compliance requirement. What they are not doing is applying what they learned.
Research from DataCamp's 2026 analysis of workforce AI training programs puts the failure mode precisely: employees complete videos and quizzes, but low practice frequency and the absence of real-world application prevent them from building true AI capability or improving business outcomes.
Only 11% of L&D leaders say they feel extremely confident in their organization's future skills-building strategy, even though 61% have already adopted or tested AI in their programs. That is not a confidence gap rooted in content quality. It reflects a deeper structural problem in how training is delivered.
The platform credibility problem
When you run an AI fluency training program through a traditional LMS, you are asking employees to think differently about how they work, while showing them exactly how your organization approaches learning: slowly, manually, with rigid paths and content that takes weeks to update.
Your LMS cannot personalize a learning path to role or function. It cannot surface the right resource when an employee is mid-task and needs an immediate answer. It cannot model the just-in-time information retrieval that AI-native work actually looks like. It presents employees with a world where learning is entirely separate from doing.
That is the opposite of what an AI literacy program is supposed to teach.
One L&D executive who spent years at a global pharmaceutical company as head of learning innovation described the experience directly in a conversation with Disco:
"Most of us have non-AI-native learning platforms, and that's a problem. It's always problem solving, problem solving, banging my head against the wall. I told my team before I left: please think about changing this non-AI-native platform to an AI-native platform because it would make our lives so much easier."
That experience is not unusual. It is the default for most large organizations. And it points to something the major LMS vendors resist acknowledging: bolt-on AI features do not fix a pre-AI architecture.
Docebo announced AI agents and skills intelligence at Inspire 2026. TalentLMS has published guidance on what AI skills employees need. These are useful additions to a content delivery system. But they are additions. The underlying architecture was not built to model AI-first work, support agentic learning, or adapt dynamically to how individual employees actually learn.
What an AI fluency training program actually requires
A genuine AI fluency training program is not a course about AI. It is an experience that uses AI to teach AI. That distinction has real operational consequences for the platform you choose.
Here is what the infrastructure needs to support:
Personalized pathways by role, not by cohort. Generic workforce AI training is one of the most consistent failure modes in enterprise upskilling. Teaching every employee the same content regardless of their job function creates immediate resistance and poor retention. A sales rep and a finance analyst need different AI fluency foundations: different tools, different workflow contexts, different risk tolerances. A platform that cannot differentiate between them cannot deliver meaningful fluency at scale. 23% of L&D leaders already report that learning paths are not tailored to specific roles, and that gap compounds directly into low adoption.
Just-in-time access, not just-in-case content. Employees who are building AI fluency are not in learning mode when they most need help. They are in work mode. They need to ask a question, get a precise answer grounded in the program curriculum, and apply it immediately. A learner-facing AI agent trained on your program content, live event recordings, and internal documentation bridges learning and doing in the moments that actually count.
Social learning and peer connection. AI fluency is not built in isolation. It develops through peer observation, shared prompting experiments, and cohort-level discussion. Platforms that strip social context from learning reduce AI training to information delivery. Research on cohort-based learning consistently shows higher engagement and stronger behavioral transfer compared to self-paced solo courses. For AI fluency specifically, watching how peers apply tools and what is actually working in their workflows is a significant driver of adoption across a team.
Content that can be updated in hours, not weeks. AI tools and workflows are changing faster than any content team can manage through manual authoring. An AI fluency program built on a platform that requires a course designer to rebuild modules every time a tool or workflow changes will be perpetually out of date. The platform needs to make content maintenance fast and low-overhead, or the program becomes a liability within six months.
Outcome data tied to behavior, not just completions. Organizations investing in AI fluency programs face increasing pressure to connect training spend to business outcomes. Completion rates satisfy a compliance requirement. They do not answer the question executives are asking: is this changing how people work? A platform that tracks engagement across live events, AI agent interactions, peer discussions, and structured coursework builds an outcome picture that goes well beyond the quiz score.
Why L&D leaders can't prove ROI without the right infrastructure
The pressure on L&D teams to demonstrate measurable impact from AI fluency investments is real and growing. Boards and executive teams are asking for proof that training dollars are translating to changed behavior, not just certifications issued.
The problem is that most platforms were not built to connect those dots. When the LMS cannot differentiate by role, cannot surface learning in the flow of work, and cannot generate reports tied to behavior change, L&D leaders cannot answer the question their executives are asking.
Only 11% of L&D leaders report feeling extremely confident in their future skills-building strategy, even among those already running AI programs. The confidence gap is a data gap. And the data gap is a platform gap.
An AI-powered learning platform built for this use case generates a fundamentally different kind of outcome data: which learners are using the AI agent and what questions they are asking, which cohort discussion threads are driving the most engagement, which modules are being replayed versus abandoned, and how learning activity correlates with on-the-job behavior over time. That is the evidence base L&D teams need to make the case for continued investment.
For organizations navigating the AI skills gap in education contexts, the infrastructure problem looks similar beneath the surface. We covered that version in Building an AI fluency program for educators when your district has no federal budget left.
The platform is the program
This is the insight that the major LMS vendors resist because it calls their core product into question: in an AI fluency training program, the platform is not a delivery mechanism. It is a signal.
Employees in your AI fluency program are watching how your organization uses AI. They are watching whether your learning platform models the behavior it is asking them to adopt. When the platform itself is a pre-AI tool with an AI feature layer applied on top, the program loses credibility before the first module loads.
An AI-native learning platform does not just deliver AI content more efficiently. It demonstrates AI-first thinking at every interaction. The course builder uses AI. The learner agent uses AI. The pathway logic uses AI. Employees are not being told what AI-first work looks like. They are experiencing it.
Candice Faktor, a learning operator who has built programs across dozens of organizations on Disco, made the observation clearly: "AI upskilling is the number one use case across Disco customers. Whether you're doing it for sales, for healthcare, or for consulting, the content of the program is AI upskilling. That's what everybody is trying to figure out how to do best for their specific vertical."
The demand is clear. The platform gap is equally clear. Most organizations are trying to meet the demand with infrastructure that deepens the gap.
What this looks like in practice
Organizations using Disco to run enterprise AI fluency programs are building experiences that combine structured cohort courses with a persistent Ask AI agent trained on program content, live event recordings, and role-specific resources.
Employees can complete a module on a Tuesday, hit a workflow question on Thursday, and get a precise answer sourced from the program curriculum in seconds, without leaving the platform or opening a general-purpose search engine. The agent cites sources, links to the relevant recording or lesson, and keeps the learning tethered to the program.
New content is generated using Disco's AI program builder, which allows L&D teams to bring existing materials, PDFs, and third-party links into a course structure and generate a full curriculum in minutes. Updates that would take a traditional content team weeks take hours.
Community is built in. Live events, channel discussions, peer directories, and shared resources keep cohort connection active between synchronous sessions. The result is an employee AI upskilling program that does not just teach AI. It runs on it.
Where to start
If your organization is serious about building genuine AI fluency in your workforce, start by auditing your delivery infrastructure before touching the curriculum.
Ask three questions about your current platform:
One: Can it deliver a different learning path to a sales rep than to a finance analyst without manual reconfiguration for each?
Two: Can it answer a learner's question about the program curriculum at 2pm on a Wednesday without a human facilitator in the loop?
Three: Can your content team update a lesson to reflect a tool change without rebuilding the entire module?
If the answer to any of these is no, the platform is the constraint. Better content will not fix it.
Disco is a purpose-built AI fluency training platform designed for cohort-based learning, community connection, and intelligent automation. To see how organizations are running enterprise AI upskilling programs on Disco, book a demo.