
A new study from the Massachusetts Institute of Technology (MIT) has found that using generative AI tools like ChatGPT can significantly reduce brain activity during writing tasks, and may impair users’ ability to retain information and think critically.
Who and Why?
The research was led by Dr Nataliya Kosmyna, a scientist at MIT’s Media Lab, and was motivated by growing concerns about the cognitive impact of widespread reliance on AI tools in education and professional life. Specifically, the team wanted to understand how large language models (LLMs) like OpenAI’s GPT-4o affect human cognition when people use them to perform tasks traditionally done unaided, such as writing short essays.
While AI is often praised for increasing productivity, Kosmyna and her team wanted to test whether that convenience actually comes at a cognitive cost. “This is not about calling AI bad,” she told The Register. “But it’s important to understand the trade-offs, especially in learning contexts.”
How the Study Was Carried Out
The preprint, titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, was based on four essay-writing sessions involving 54 Boston-area university students. The study divided participants into three groups:
– One group wrote essays entirely unaided (the “Brain-only” group).
– Another used traditional search engines for research.
– The third relied on OpenAI’s GPT-4o for assistance.
In each session, students had 20 minutes to write on a given topic. Participants wore electroencephalogram (EEG) headsets that monitored neural activity throughout. In the fourth and final session, some students were switched to the opposite group, so LLM users had to write unaided, and unaided writers were allowed to use AI.
What the Brain Scans Showed
The results, the researchers said, were “striking.” EEG readings revealed that the Brain-only group consistently showed the highest levels of brain activity. These participants demonstrated stronger and more distributed neural connectivity patterns associated with cognitive load, specifically in regions involved in attention regulation, memory encoding, and semantic processing.
In contrast, those using AI tools showed up to 55 per cent lower brain connectivity, with reduced activity across all EEG frequency bands. Even search engine users showed a notable drop of 34–48 per cent, depending on the task.
The metric used to assess this, known as the direct Directed Transfer Function (dDTF), reflects how different parts of the brain communicate during complex cognitive tasks. The LLM group, the study noted, “elicited the weakest overall coupling,” suggesting lower engagement of executive functions and internal synthesis.
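For readers curious about the metric itself, here is a sketch of how dDTF is conventionally defined in the EEG connectivity literature (the direct DTF introduced by Korzeniewska and colleagues); the notation below is the standard formulation from that literature, not something spelled out in the MIT paper or this article. A multivariate autoregressive (MVAR) model is first fitted to the multichannel EEG signal,

\[ X(t) = \sum_{k=1}^{p} A_k\, X(t-k) + E(t), \]

and moved into the frequency domain via \( A(f) = I - \sum_{k=1}^{p} A_k e^{-2\pi i f k} \), with transfer matrix \( H(f) = A^{-1}(f) \). The full-frequency DTF from channel \( j \) to channel \( i \) is

\[ \eta_{ij}^{2}(f) = \frac{\lvert H_{ij}(f)\rvert^{2}}{\sum_{f}\sum_{m}\lvert H_{im}(f)\rvert^{2}}, \]

and dDTF weights this by the partial coherence \( \chi_{ij}(f) \), which filters out connections that are merely mediated by third channels:

\[ \delta_{ij}^{2}(f) = \eta_{ij}^{2}(f)\,\chi_{ij}^{2}(f). \]

A higher \( \delta_{ij}^{2}(f) \) indicates stronger direct information flow from one recording site to another, which is the sense in which the LLM group’s coupling was “weakest.”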
Why This Matters for Learning and Memory
The implications for learning were clear. For example, participants in the Brain-only group scored higher on factual recall tests and reported greater “ownership” of their work, meaning they were more likely to remember and understand what they had written. By contrast, LLM users struggled to recall even basic content from their own essays.
As Kosmyna explained, the use of AI may result in what the study terms “cognitive debt”: offloading thinking to an AI tool results in weaker internal encoding of the information. “The AI is doing the heavy lifting,” she said, “but that means your brain isn’t.”
Participants who switched from AI to unaided writing in the final session performed particularly poorly. When stripped of LLM support, this group exhibited what researchers called “under-engagement,” with marked reductions in both alpha and beta brainwave connectivity. In other words, having relied on AI too heavily, they struggled to switch back to independent thought.
Are There Any Benefits to Using AI for Learning?
Interestingly, participants who moved from Brain-only writing to LLM-assisted writing performed relatively well. Their brain activity remained high, and they produced more cohesive content, likely because they had already internalised the structure and subject matter.
This, the researchers say, supports a “delayed AI integration model” where learners are encouraged to first engage deeply with material unaided, and only later use AI tools to support or extend their thinking.
“Taken together, these findings support an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort,” the MIT team wrote in their report.
What This Means for AI Companies
For AI developers and edtech firms, the findings present a potential credibility challenge. While tools like ChatGPT can help streamline tasks and improve output, they may also short-circuit the very mental processes that build long-term knowledge and critical reasoning.
That said, the study does not suggest AI tools are inherently harmful, only that their role in education and professional development needs to be more carefully designed.
With growing interest from schools, universities and corporate training providers in AI-powered learning, the MIT findings may prompt calls for clearer usage guidelines and pedagogical frameworks that preserve the value of cognitive effort.
Business and Professional Users
For business users, especially those in knowledge-based sectors, the study raises pressing questions about productivity versus proficiency. Using ChatGPT to draft reports or generate marketing copy may save time, but at what cost to understanding, originality or professional development?
The study’s findings also suggest that while AI may improve speed and surface-level output, it could impair memory retention and reduce long-term mastery of content. This could have downstream effects in areas like strategic thinking, problem-solving, and even leadership development, where depth of understanding is key.
As more firms integrate LLMs into workflows, there’s likely to be a growing need for “cognitive balance”, where AI tools are used not as crutches but as scaffolds, supporting rather than replacing human effort.
Challenges and Limitations
It should be noted that although the MIT study has gained widespread attention, it is still a preprint and has not yet been formally peer-reviewed. That said, its methodology appears sound, and the EEG-based approach gives it added credibility compared to survey-based or observational studies.
One challenge is that the research focused on a narrow task, i.e. essay writing in an academic context. Therefore, it remains to be seen whether similar cognitive reductions occur across other task types such as coding, design, or strategic planning. Further research is also needed to examine how age, profession, or digital literacy might influence outcomes.
Also, some critics caution against drawing sweeping conclusions. As Professor Rose Luckin of University College London noted in related commentary, “The goal shouldn’t be to avoid AI, but to use it wisely. The key is metacognition—knowing when to trust it, when to question it, and how to engage critically.”
What Does This Mean For Your Business?
The findings from MIT present a clear warning about how AI tools are integrated into everyday work and learning. When individuals rely too heavily on chatbots like ChatGPT to generate content, their brains show measurable signs of disengagement. This isn’t a theoretical concern but is backed by brain scans, reduced recall, and weaker understanding of material, even when judged by both human teachers and AI systems.
For UK businesses, especially those investing in AI-driven productivity tools, the message is that efficiency gains are valuable, but they should not come at the expense of long-term knowledge development or independent thinking. Over time, a workforce that regularly outsources complex tasks to AI may become less capable of synthesising information, solving novel problems, or retaining strategic insights. In sectors that depend on deep subject knowledge, such as finance, law, consultancy, and research, this cognitive drift could affect decision quality and competitive advantage.
There are also implications for HR and training leaders. For example, if employees are encouraged to use LLMs without clear guidance, the result could be surface-level output with limited learning taking place. Onboarding, professional development, and internal knowledge-sharing may all suffer if too much cognitive effort is handed off to AI. Equally, if properly managed, businesses can use these findings to strike a better balance. Encouraging employees to first think through a problem themselves before turning to AI for comparison or refinement may protect both quality and learning outcomes.
For educators, tech developers, and policymakers, the study adds to a growing body of evidence that AI tools should not be treated as neutral helpers. Their design and usage shape behaviour and cognition, sometimes in ways that reduce intellectual ownership or capability. It highlights the need for guardrails, i.e. educational strategies that build critical thinking before AI access, interface features that encourage user reflection, and corporate policies that treat AI as a partner rather than a substitute.
The bigger challenge may be cultural. As AI becomes more ubiquitous and its outputs more polished, the temptation to default to it will grow. This study shows that without conscious effort to preserve our own cognitive involvement, we risk weakening the very faculties that made these tools possible in the first place. For any organisation that values deep expertise, the long-term costs of unchecked reliance on AI may outweigh its short-term gains.