KNOLSKAPE’s recent L&D Predictions Report 2026 offers an uncomfortable but useful mirror. Most organisations are still in early-to-mid maturity, running PoCs, pilots, and experiments, while only a small minority have truly “productised” AI across functions. That gap matters because it explains why boards are hearing about “AI momentum” without seeing durable operational lift.

A global signal makes the stakes harder to ignore. The World Economic Forum’s Future of Jobs Report 2025 notes that, on average, workers can expect two-fifths (39%) of their existing skill sets to be transformed or become outdated between 2025 and 2030. If skill disruption is accelerating, then “training completed” is not a strategy. It is, at best, table stakes; at worst, a distraction.

Two urgent shifts

For boards, CXOs, public leaders, and policymakers, two shifts have become urgent. First, AI-related L&D must move from measuring content delivery to measuring application: whether learning shows up in daily work and decision-making. Second, enterprise AI must move from scattered pilots to mainstream adoption with credible ROI and governance. The two are inseparable: weak learning application keeps pilots from scaling, and pilot sprawl makes learning diffuse and untethered to outcomes.

Start with the measurement problem. Enterprise learning has perfected what is easiest to count: completions, hours consumed, and satisfaction ratings. In the AI era, these are increasingly misaligned with what leadership needs to know. AI capability is not a concept employees understand once and then “have.” It is a behaviour that must show up repeatedly: inside meetings, analysis, customer interactions, citizen-services delivery, and decisions that carry risk.

KNOLSKAPE’s diagnostic quietly explains why organisations get stuck. They are relatively strong at content delivery but weaker at the mechanisms that convert learning into capability: assessment, simulations and role plays, and reinforcement. Learning often ends at completion, precisely where capability begins to form as the bridge between learning and on-the-job impact.

A board-level question

Boards should press management with a question that cuts through the noise: what is changing in real work, and what moved because of it? If the best answer is “completion rates,” the organisation is still measuring delivery, not capability.

What should replace it is a more consequential barometer. Call it an Application Quotient: not a test score, and not surveillance, but a composite signal that answers whether employees are applying AI in critical workflows and whether that application is improving outcomes.

The evidence already exists in most enterprises and organisations, if leaders choose to connect it. Operational KPIs show cycle time, rework, error rates, cost-to-serve, incident resolution, and customer escalations. The missing step is to link capability programmes to a small set of “work truths,” then build lightweight proof loops: manager-reviewed samples of AI-assisted outputs, adoption thresholds in priority workflows, and decision-quality markers (did teams use AI to generate alternatives, stress-test assumptions, document rationale, or surface risks?).

This shift also forces a leadership reckoning: L&D cannot remain a content-delivery function. It must become a performance system.
And performance systems have a few non-negotiables: role-based proficiency definitions (“what good looks like” in real work), repeated practice loops under realistic constraints, and reinforcement through managerial rhythms. If reinforcement is weak, boards and leaders should not be surprised when learning fails to translate.

The pilot problem

Now to the second bottleneck: why pilots multiply but value extraction remains elusive. Pilots take off easily because they look like movement. They create demos, internal buzz, and pockets of innovation. But pilots often don’t scale because they don’t add up. Different functions choose different tools, vendors, standards, and architectures; each local optimisation creates enterprise integration debt. KNOLSKAPE calls out this fragmentation directly: multiple pilots across functions can produce inconsistent standards and outcomes, making enterprise-wide adoption harder.

The organisations that scale AI are rarely the most flamboyant. They treat AI as enterprise infrastructure: governed, standardised where it matters, and designed to be repeatable. KNOLSKAPE points to what distinguishes scaled players: governance maturity, data readiness, workforce capability, and clear value articulation. Those aren’t “AI features.” They’re execution capabilities.

This is the moment to resist the urge to fund “more experiments” and instead demand an operating model. Who owns adoption? What is the enterprise AI strategy? What standards govern data access and model usage? What qualifies a use case to scale? How will risk be monitored, audited, and communicated? When AI adoption is everybody’s job, it becomes nobody’s job: business wants outcomes, IT wants stability, HR wants capability, and risk wants guardrails. Without a single adoption spine, pilot sprawl becomes the default.

From use cases to value chains

A practical forcing function is to shift from organising AI around isolated “use cases” to organising around AI value chains, the end-to-end flows where performance is actually created: lead-to-cash, procure-to-pay, incident-to-resolution, hire-to-productivity. Value chains force integration: shared data, cross-functional ownership, standardised tooling, and measurable outcomes. They also make ROI harder to manufacture after the fact, because value eventually appears (or doesn’t) in operating metrics.

What, then, should change next, at the level boards, CXOs, public leaders, and policymakers care about?

First, name the outcome correctly. Not “AI training,” but AI-enabled performance. Names shape dashboards, dashboards shape attention, and attention shapes outcomes.

Second, make scaling conditional on measurement. No initiative should graduate from pilot to mainstream without a declared value hypothesis and an agreed method to measure it, set before rollout, not after. This is how experimentation becomes accountability.

Third, rebalance learning spend from content-heavy programmes to practice-heavy pathways that mirror real work: simulations, coached application, and manager-led review loops. KNOLSKAPE’s observation that simulations and reinforcement lag content delivery should be treated as a strategic risk, because AI capability is largely judgement, not knowledge.

Fourth, make managers the transmission belt. If weekly managerial rhythms are not reviewing AI-assisted outputs, reinforcing guardrails, and demanding clear decision rationale, behaviour will not stick. L&D cannot substitute for management.

Finally, build governance that enables speed safely.
Boards and regulators will increasingly scrutinise data protection, bias controls, explainability in high-stakes decisions, and auditability. Mainstreaming without governance is a reputational crisis waiting to happen; governance without mainstreaming is bureaucracy without benefit. The aim is rails that enable responsible adoption at scale.

For government and policy leaders, the implication is similar. If national productivity gains are a policy objective, incentives should shift from counting “people trained” towards evidence of application: which public workflows improved, what moved in turnaround times and error rates, and how trust and accountability are being protected.

India has a genuine opportunity here: not by producing more certificates, and not by launching more pilots, but by making AI adoption boringly operational, embedded in workflows, measured through application, governed with clarity, and scaled through value chains.

In the AI era, content is abundant. Capability is scarce. The organisations that pull ahead will stop celebrating learning delivered and start managing learning applied.

(Bhanu Potta is Senior Advisor at Birla AI Labs and Founding Partner at ZingerLabs.)