This is a paper from an MIT study. Three groups of participants were tasked with writing an essay. One of them was allowed to use an LLM. These were the results:

The participants' mental activity was also checked repeatedly via EEG. As per the paper's abstract:

EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.

  • Zozano@aussie.zone · 4 days ago

    Lol, oops, I got poo brain right now. I inferred they couldn’t edit because the methodology doesn’t say whether revisions were allowed.

    What is clear is that they weren’t permitted to edit the prompt or add personalization details, which seems to imply the researchers weren’t interested in understanding how a participant might use it in a real setting; just passive output. This alone undermines the premise.

    This makes it hard to assess whether the observed cognitive deficiency was due to LLM assistance or to the method by which it was applied.

    The extent of our understanding of the methodology is that they couldn’t delete chats. If participants were only permitted a one-shot generation per prompt, then there’s something wrong.

    But just as concerning is the fact that it isn’t explicitly stated.