The rapid integration of artificial intelligence into the professional landscape has created a paradoxical promise: the ability to do more while knowing less. As tools like large language models become ubiquitous in fields ranging from software engineering to data analysis, a fundamental question emerges about the long-term cost of our new-found efficiency. A recent study from researchers at Anthropic, titled ‘How AI Impacts Skill Formation,’ provides a rigorous look into this dilemma, revealing that the way we interact with these tools creates two distinct paths for professional development.

The researchers recruited a group of coders and divided them into two groups, one with access to AI tools and another without, to complete a 35-minute coding challenge. At the end of the challenge, all participants took a test of their Python programming proficiency. Upon evaluation, the team found that those in the control group scored higher than those in the treatment pool, suggesting a stark divide between high-scoring and low-scoring interaction patterns. While AI can accelerate the completion of a task, it can simultaneously decelerate the mind if used as a substitute rather than a supplement, an idea echoing my earlier column on building careers in the age of AI.

The treatment group’s path, identified as the low-scoring interaction pattern, is characterised by what researchers call cognitive offloading. In this scenario, the user treats the AI as a primary agent of execution rather than a collaborator. When faced with a complex task, such as learning a new programming library, the low-scoring participant focuses almost exclusively on the output. They delegate the heavy lifting of code generation and debugging to the AI, moving through the assignment with deceptive speed. This group often finishes tasks fast, yet their comprehension of the underlying mechanics remains remarkably shallow.
Not just that: the researchers also pointed out that many in this group tended to spend more time interacting with the AI assistant, time that could otherwise have been used to learn the new skill. By bypassing the iterative, often frustrating process of trial and error, they inadvertently skip the very neurological “struggle” required for deep learning. For these individuals, the AI tool serves as a high-tech crutch; they reach the finish line, but their internal “muscle memory” for the skill is never built, resulting in quiz scores that plummet when the tool is removed.

This contrasts sharply with the high-scoring group, whose approach to AI was fundamentally different. They did not see AI as a replacement for their own logic but as a peer or senior colleague. Instead of asking the AI to “write the code,” they asked conceptual questions. They sought explanations for why a particular function was used, or requested that the AI break down a generated snippet into its component parts. This group demonstrated a high level of cognitive engagement, maintaining an active mental model of the task at hand. While they might take longer to complete a project than the pure delegators, their retention is significantly higher. By using AI to clarify concepts and validate their own reasoning, they successfully converted the AI’s output into personal knowledge. For the high scorer, the AI is a catalyst for mastery, not a shortcut around it.

The study’s findings suggest that the primary differentiator between these two paths is not the amount of manual labour performed, but the degree of mental involvement. Interestingly, the research noted that even when participants manually re-typed code instead of copy-pasting it, their learning did not necessarily improve if they were not mentally processing the “why” behind the syntax. This highlights a critical trap in the modern workplace: the illusion of competence.
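To make the contrast concrete, here is a hypothetical sketch of what “breaking a generated snippet into its component parts” can look like in practice. The task and function names are illustrative, not taken from the study: a delegator would accept the compact one-liner as-is, while an engaged learner might ask what each call contributes and then re-derive the same logic as explicit steps to confirm their understanding.

```python
from collections import Counter

# A typical AI-generated snippet: count word frequencies in a text.
def word_frequencies(text: str) -> dict[str, int]:
    # str.lower() normalises case so "The" and "the" count as one word.
    # str.split() with no arguments splits on any run of whitespace.
    # Counter tallies each word in a single pass over the list.
    return dict(Counter(text.lower().split()))

# The engaged learner's follow-up: rewrite the one-liner as explicit
# steps to verify what each part of the pipeline actually does.
def word_frequencies_explicit(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```

Both functions produce the same result; the point of the second is not the code itself but the mental model built by reconstructing it.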
It is possible to be highly productive in the short term by following the low-scoring path of delegation, but this leads to a hollowing out of expertise. In an era where AI can handle routine execution, the value of a human professional increasingly lies in the ability to supervise, architect, and troubleshoot, skills that are developed only through the high-scoring path of engaged learning.

The choice between the two ways of building skills rests on how we value our own expertise. The low-scoring path offers the siren song of immediate results and “vibe coding,” where one can produce functional work without a deep grasp of the foundations. The high-scoring path requires more discipline, demanding that we slow down to ask “how” and “why” even when a solution is just a prompt away. To thrive in an AI-augmented world, we must resist the urge to offload our thinking. By choosing the path of high engagement, we ensure that as the tools around us get smarter, we are getting smarter alongside them.

Published – February 14, 2026 08:00 am IST