If one were to map the trajectory of global AI governance, the geographic markers would tell a story of diminishing caution. When I covered the Responsible AI in the Military Domain (REAIM) summit in The Hague in 2023, it was a gathering defined by a sombre gravity, where nations convened to discuss the military applications of artificial intelligence and the urgent need for a “responsible” framework. The mood was one of containment. Since then, the diplomatic caravan has moved through Bletchley Park, Seoul, and Paris, finally arriving at the AI Impact Summit in India. But something fundamental has shifted along the route.

I’d like to map this shift through an index I’ll call the “Responsibility Index” — a measure of how much weight safety and ethics carry versus speed and scale. On this index, safety is declining and big money is on the rise. The recent proceedings in India confirm the transition: the era of wondering whether we should build certain things has been definitively replaced by the race to see how fast we can fund them.

The India summit serves as a microcosm of this global pivot. While the rhetoric of “safety” still appears in press releases, the atmosphere has changed. The conversation has moved from the philosophical concerns of researchers to the logistical demands of industrialists. In The Hague, the stars of the show were ethicists, diplomats, and military strategists concerned with the laws of war. In the current cycle, the spotlight has been hijacked by the cheque-writers. The “big money” has effectively eclipsed the “deep talent.”

This overshadowing of talent by capital is a crucial distinction. In the early days of the generative AI wave — which feels like decades ago but was only 2022 — the power lay with the architects of the technology.
The authors of the ‘Attention Is All You Need’ paper or the early teams at DeepMind held the leverage because they possessed the rare cognitive surplus required to birth these models. Today, the barrier to entry is no longer just genius; it is a GDP-sized capital expenditure. When the primary requirement for relevance shifts from brainpower to computing power, the incentives shift from scientific rigour to return on investment.

Nothing illustrates this commoditisation of intelligence quite like the rhetoric of the industry’s figureheads, chief among them Sam Altman, head of OpenAI and the face of this AI revolution. On the sidelines of India’s AI summit, Mr. Altman compared the energy use of massive data centres to the cost of training a single human being for twenty years. Such a comparison should have stopped the industry in its tracks, yet it barely registered a blip.

The statement is profound, and it suggests a worldview in which biological intelligence and synthetic intelligence are merely competing line items on a balance sheet. If a data centre can produce an equivalent cognitive output for a fraction of the time and money it takes to raise, educate, and train a human, the market will inevitably choose the silicon option. When human development is viewed as an inefficient trade-off compared to GPU clusters, the “responsibility” to protect human-centric systems naturally erodes. The goal ceases to be augmenting human capability and shifts toward rendering the “expensive” human obsolete for the sake of margin.

This is why the Responsibility Index is falling. Responsibility is expensive: it requires friction, audits, pauses, and the occasional decision not to release a product. In the frantic atmosphere of the India summit, and of the preceding summits in Paris and Seoul, friction is the enemy. The focus has turned entirely to infrastructure — energy grids, chip fabrication, and data sovereignty.
The questions are no longer about the morality of the algorithm, but about the ownership of the pipe it travels through. We have officially entered the industrialisation phase of AI. Just as the industrial revolution eventually stopped worrying about the craftsmanship of the individual weaver and focused on the output of the loom, the AI revolution is moving past the “craft” of responsible coding to the brute force of scaling laws.

The Hague’s REAIM summit felt like a warning; the recent summits feel like a ribbon-cutting ceremony for a runaway train. As the heavy machinery of global capital locks into place, the voices calling for a pause or a safety check are growing quieter, drowned out by the hum of cooling fans in billion-dollar data centres. The technology is undoubtedly getting smarter, but the wisdom guiding its deployment seems to be depreciating with every new summit.

Published – February 28, 2026 08:00 am IST