What is cognitive offloading?! Letting LLMs “think” for you is not a “decision holiday”

Your TL;DR Briefing on things worth tracking — and talking about over your next power lunch. *Wink.* This time, the thing is the tens of thousands of words of reporting and research we read so, well, you don’t have to.

Early internet GIF of a vibrant blue-green tropical cocktail in stemmed glass with red umbrella and lime garnish.
I went to GifCities.org, a project of the Internet Archive. I searched "brain." Somehow, this cocktail showed up? And I was like, yeah, this captures the "decision holiday" energy, baby.

The thing is: 

It’s only March, but there’s already a solid contender for the term of 2025: “cognitive offloading” — what happens when people come to rely on tech like large language models (LLMs) to complete tasks from “research” to, as dating app CEOs strangely hope, “flirting” with shirtless and often faceless torsos nearby. 

The term isn’t new, but it’s particularly timely now, as millions of LLM users are being sold on the potential for such tools to assist with problem-solving and decision-making. That offloading raises concerns even if your use of “AI” tools doesn’t amount to a full-blown “decision holiday,” as that one New York Times reporter claimed she took in November when she absurdly put “AI” in “charge of [her] life” for a full week. (Girl! Why?!) And the offloading is widespread: reports like this survey from The Verge find that 61% of Gen Z respondents in a representative sample of the U.S. population are currently using “AI” tools instead of conventional search engines.

Three papers, in particular, have been cited frequently in the recent discourse.