As India hosts the global AI impact summit, rhetoric about the transformative potential of AI in healthcare risks overshadowing critical ground realities and concerns. On February 7, a different kind of conversation on AI took place in Delhi, at the national consultation on People-led AI in Health, which highlighted an alternative approach grounded in health rights and centred on patients and providers. Clinicians, public health experts, AI technologists, health workers, and patient advocates gathered not to celebrate AI but to interrogate it and explore alternatives.

Though AI is being projected as a major solution to India's health problems, the consultation looked past the international hype, recognising that deploying AI through centralised, commercially driven systems that override the rights of patients, communities and health workers may do more harm than good.

Globally, the use of AI in health has shown promise in specific domains such as image recognition in radiology, analytics that aid diagnosis in controlled environments, and workflow assistance. But systematic reviews repeatedly show that tools which perform well in pilot settings flounder in real-world contexts. AI is good at recognising and matching patterns, but healthcare is much more than pattern recognition: it involves complex clinical and ethical judgements, the social contextualisation of patients, explanation and reassurance, and direct physical caring, all of which rest on human relationships, not just algorithms.

Imperative to protect rights

The Delhi consultation raised sharp concerns about digital extractivism: who owns health data, who benefits from the derived intelligence, and who bears the risks? Patients need understandable narratives and empowerment, not treatment merely as sources of data.
If AI tools are trained primarily on urban, digitised populations, they may entrench caste, gender, regional, and socio-economic biases. Hence any use of AI in health must be anchored in a strong rights-based framework.

This includes the right to understand: patients must not merely access their health data; they must be able to comprehend it. AI systems should translate complex medical information into clear, relevant explanations that support informed decisions.

The right to local processing means that sensitive health data should by default be processed locally wherever possible, rather than centralised on corporate or state-controlled servers; any sharing to the cloud must be explicit and revocable.

The right to ongoing control implies that consent cannot be a one-time formality; individuals must be able to withdraw access to their data, and should control not just the data itself but also the insights generated from it.

The right to equity and access means that AI systems must be audited for bias, made accessible across regions and languages, and governed transparently, so that they reduce rather than deepen health inequalities. AI-supported services developed with public resources should be free at the point of use within public health systems.

Non-exclusion must be guaranteed: no one should be denied care because they do not engage with AI systems; non-AI pathways to healthcare must always remain available and viable.

Supplementarity to human care

A core principle is that AI must supplement, not substitute, human care. AI might support documentation and data interpretation, but decisions in healthcare must remain with accountable human providers. Humans must always be in the loop for all AI-assisted functions, keeping in view that health workers and professionals are the backbone of care.
In health systems already marked by precarious labour conditions, there is a real risk that AI will become a justification for staff reductions, casualisation, increased workloads, or algorithmic surveillance of ASHAs and other frontline workers. Approval of AI tools should therefore require labour impact assessments, explainability for frontline workers, and explicit guarantees against workforce reduction. Technological gains must enhance the capacity and dignity of health workers, not displace them.

Political economy of AI

The basic question is not whether AI can help, but whom it will serve. The current use of AI is not neutral; it is largely embedded in monopolistic, profit-driven models. If deployed through commercial platforms that centralise patient data, AI risks deepening corporatisation, creating an elite layer of care, and expanding high-cost markets rather than rational access. If public data and public funds build AI systems, their primary obligation must be to strengthen public provisioning, not to subsidise corporate profits.

Any use of AI in India must be grounded in a health systems approach. AI might be judiciously deployed to strengthen primary and preventive care and to empower patients, including assistance in rational drug use, improved referral systems, demystified hospital billing, and simplified medical information for users. But India's health system challenges are not primarily technical; they are political, economic and structural: chronic underinvestment in public health, shortages of trained personnel, inadequate regulation of commercial healthcare, and high out-of-pocket expenditure. These are institutional failures that algorithms will not fix. We should not expect technology to solve what are basically policy and systemic problems, a fallacy known as 'techno-solutionism'.
Like any technology, AI must serve patients' rights, health equity and public purpose, while health workers and professionals remain the backbone of care. Health data must above all belong to patients and people, and any intelligence derived from it must be accountable to them. AI and other technologies can assist in shaping the future of Indian healthcare, but people, health workers and public health must remain firmly at the centre.

Dr. Abhay Shukla is a public health physician and national co-convenor of Jan Swasthya Abhiyan. Views expressed are personal. He acknowledges Surajit Nundy, the RAXA team and the participants of the People-Led AI in Health consultation for their valuable ideas, which informed this article.

Published – February 20, 2026 12:34 am IST