Microsoft says AI tools such as Copilot or ChatGPT are affecting critical thinking at work – staff using the technology encounter 'long-term reliance and diminished independent problem-solving'
AI tools might be convenient for workers, but there's a risk employees will become too reliant on them over time
Using generative AI at work may impact the critical thinking skills of employees — and that's according to Microsoft.
Researchers at Microsoft and Carnegie Mellon University surveyed 319 knowledge workers in an attempt to study the impact of generative AI at work, raising concerns about what the rise of the technology means for our brains.
Concerns about the negative impact are valid, the report noted, with researchers pointing to the “deterioration of cognitive faculties that ought to be preserved”.
That point drew on earlier research into the impact of automation on human work, which found that depriving workers of the opportunity to use their judgement left their cognitive function "atrophied and unprepared" to deal with anything beyond the routine.
Similar effects have been observed elsewhere, with smartphone use linked to reduced memory and social media use linked to shorter attention spans.
"Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving," researchers said.
The study noted that users engaged in critical thinking mostly to double-check the quality of the work produced, and that the more confidence a worker had in the generative AI tool in question, the less likely they were to apply their own critical thinking to the task.
"When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship," the research found.
Researchers said more work was needed on the subject, especially because generative AI tools are constantly evolving and changing how we interact with them.
They called for developers of generative AI to make use of their own data and telemetry to understand how these tools can "evolve to better support critical thinking in different tasks."
"Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows," the researchers added. "To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers."
Reliance on AI tools could become a big problem
All of this matters because Microsoft has pushed its AI-powered Copilot tools across its wider software portfolio, a trend mirrored across the industry, and some workers are bringing generative AI into their companies without explicit approval, too.
Beyond cutting costs, one of the long-cited assumptions about AI is that it could remove routine tasks from day-to-day work, freeing employees from drudgery to focus on more creative work.
Achieving that requires finding the right balance between fully automated tasks, those with a human in the loop, and wholly human work.
Research from Stanford has suggested workers are more effective and productive when working alongside an AI assistant, but it also found we easily slip into overreliance on such tools, breeding complacency and excessive trust in the technology.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.
