This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only condition (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed the essays using NLP, as well as scoring them with the help of human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was lowest in the LLM group and highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
Yep, and they fuck themselves over academically because lecturers notice how their time spent in online-learning platforms doesn’t match their assessment submissions.
Students inevitably get questioned about their content, only for the lecturer to discover they don’t know shit, because they cheated. Had the student actually used the LLM properly, they might have known enough about the content to scrape by.
In any case, I’ve seen this happen five times lol. One of them came about because my lecturer asked one of my classmates what ‘frivolous’ and ‘multifaceted’ meant, and she fumbled before saying she’d used a thesaurus.
She was then asked in plain speech what she’d intended to say, and ended up with an “I don’t know” - boom. Academic integrity compromised, an investigation into her Learnline metrics, and cross-referencing against her work from two years earlier. Termination of her course followed two weeks later.
Most students use LLMs; the lecturers know this. The difference is whether people use them as a tool or as a replacement.
In any case, essays are supposed to be a metric of knowledge and evidence of independent research. In practice? A good essay really only reflects one thing - that the student is good at writing essays. I know people in early childhood education who suffered through university but have more intuition and emotional intelligence than people who got by on academic prowess.