by Tim Leogrande, BSIT, MSCP, Ed.S.
🗓 APR 23 2026 • 5 MIN 15 SEC READ

While reviewing applications for internship positions, MIT research scientist Natalia Kosmyna recently noticed uncanny similarities across many of the cover letters she received. They were lengthy, polished, and frequently made arbitrary connections to her research. She determined that the letters had been written by candidates using large language models (LLMs), the kind of artificial intelligence that powers chatbots like ChatGPT, Gemini, and Claude.
Kosmyna, who studies human-computer interaction, also observed that students in her classes were forgetting material more often than they had just a few years earlier. This led her to wonder whether the growing use of LLMs was affecting their cognitive abilities.
The tools we use have always shaped the way we think. Search engines changed how we access information, making it possible to retrieve facts in seconds rather than storing them in memory. Researchers have long noted that people are less likely to recall specifics when they know the information can easily be found online, a phenomenon known as the “Google effect.”
<aside> 💡
Researchers worry that as people delegate more thinking to LLMs, the effects on memory and problem-solving may become even more pronounced. Students are increasingly using AI to complete tasks, and research indicates that young people may be especially susceptible to the detrimental effects of frequent AI use on critical thinking and other key cognitive abilities.
</aside>
Kosmyna wanted to learn more about the potential consequences of AI, so she and her colleagues asked 54 students at the MIT Media Lab to write brief essays. The students were divided into three groups: one was told to use ChatGPT, a second was directed to use Google Search, and the third was instructed to rely solely on their own knowledge and experience.
As each student completed the task, their brain activity was recorded via electroencephalography (EEG). The essay topics, which included questions about loyalty, happiness, and everyday decisions, were purposely left open-ended, so little actual research, if any, was required to complete the assignment.
According to Kosmyna, the findings were eye-opening. Students who relied on their own knowledge and experience showed the most brain activity across multiple regions. The ChatGPT group showed the least, up to 55% less than the Google Search group, with reduced activation in regions associated with processing information and creativity. Students who used ChatGPT also demonstrated weaker retention: a large share were unable to quote from their own essay after submitting it for evaluation.

Percentage of study participants from each group who were later unable to quote from their own essay. Data source: MIT Media Lab.
Kosmyna’s study is still undergoing peer review, but its findings are consistent with previous research. A study conducted at the University of Pennsylvania found that using generative AI chatbots may cause some people to experience “cognitive surrender,” a state in which users follow the AI’s advice without question and let it take precedence over their own instincts.
According to UC Berkeley computational neuroscientist Vivienne Ming, author of Robot Proof, LLMs can be a useful tool, but only if we use them to strengthen our thinking rather than replace it. Her worry is that most people don’t use AI this way. In one of her own studies, Ming asked a group of students to forecast real-world events, such as the price of oil. Most participants simply queried an AI and copied its response, and they showed little gamma wave activity in their brains, a measure of cognitive effort.
<aside> 💡
A small percentage of participants, less than 10%, used AI as a tool to collect data, which they then analyzed on their own. Compared to other participants, these students exhibited much higher levels of brain activation and made more accurate predictions.
</aside>
That distinction may be the most important one. The problem is not merely that AI can do difficult tasks for us. It is that many people use it in ways that bypass the struggle through which understanding is built. Used passively, AI encourages the kind of mental shortcutting that weakens retention. Used actively, it can help people test ideas, refine arguments, and think more deeply.
Ming says the goal should be a form of hybrid intelligence in which people and AI “do the hard stuff” together. Instead of letting bots answer questions for us, we should think first and use LLMs later to challenge and refine our ideas. Kosmyna agrees and recommends building a strong foundation of understanding before turning to AI. Ming also recommends creating “productive friction” by using prompts like the “nemesis prompt,” which asks AI to act as a lifelong enemy who incessantly criticizes your work.
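To make that concrete, a nemesis prompt might read something like this (my own illustrative wording, not a prompt Ming has published): “You are my lifelong intellectual rival. Read my draft below and attack its weakest arguments, flag every unsupported claim, and explain why a skeptical reader would dismiss it. Do not soften your critique.” The point is not hostility for its own sake; it is to force you to defend and sharpen your reasoning rather than outsource it.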
AI isn’t dangerous because it can think for us. It’s only dangerous if it convinces us to stop thinking for ourselves. Used wisely, it can sharpen our ideas, challenge our assumptions, and expand human potential. Used carelessly, it can quietly erode retention, creativity, and independent thought. The real choice is not between humans and machines, but between convenience and cognition — and whether we are willing to sacrifice one for the other.