Stop lying about what ChatGPT does to our brains!
A recent post by the Facebook content aggregator 'Genius Club' made many bold claims about the effect ChatGPT has on our brains. It may be the wrongest thing ever presented online.
I’ve been asked, several times now, about my thoughts on the recent study from MIT about the effect of ChatGPT on people’s brains. You may have seen it yourself; it has ‘done the rounds’, as they say, being covered by countless major news outlets around the world.
However, I’ve been rather busy of late, and I genuinely suspected that the study generating all these headlines would be 99% hype, 1% substance. If I’m being generous.
But, as fate would have it, I’ve now been compelled to check out the actual study. And… yep. Called it.
Because even if you were to conclude that a study based on a mere 60 students (dropping to only 18 in the final, most intriguing stages) writing essays (a very broad, nebulous, and complex task when it comes to objectively measuring brain function) while wearing EEG monitors (a tried and tested, but nonetheless limited, means of measuring brain activity) was a robust and reliable way to determine the impact of ChatGPT use on the brain[1] …you should at least wait until the study was reviewed and published before making it a global news story, right?[2]
Apparently I’m in the minority on that last point. So I didn’t feel it would help for me to get involved with the press coverage. My gut said doing so would just lend my token credibility to something that already had far too much.
…but then, just today, I was presented with…THIS!

It’s a post from the Facebook Group ‘Genius Club’. And if this post is anything to go by, they’re in danger of falling foul of Trading Standards.
Basically, this post angered me. Because it’s wrong. Severely wrong. You’d have to work really hard to create something wronger, neuroscientifically speaking. Literally every aspect of it is wrong.
Let’s start with the text.
Elevated brain activity in ChatGPT users
This is how the official ‘Genius Club’ account summarises the post. This is, presumably, the take-home finding that people will want to know about.
Interestingly, the implication is that ChatGPT use is more stimulating to the brain than not using it. But many studies have revealed that good performance on intellectually demanding tasks is often linked with lower brain activity, because the brain being scanned has become so adept and efficient at the task that it needs less effort to do it.
It's like someone lifting weights in the gym, screaming and groaning the whole time, next to someone calmly lifting the same weights without a sound. Who would you say is 'stronger'?
So, if this study did indeed find that ChatGPT raised activity in the brain, that doesn’t necessarily mean what many would assume it means.
That’s all moot, though, because that isn’t what the study found. It found the exact opposite. Basically, ChatGPT users displayed less activity than those using a basic web search, or no online assistance at all (the other groups in the study).
I can forgive misinterpretation or misunderstanding of jargon-heavy science papers within reason[3], but this is just flat out wrong.
Unless what they’re actually saying is that brain activity is elevated while using ChatGPT when compared to doing nothing at all, rather than when compared to writing essays under the other conditions.
This would be technically correct, but it’s akin to a race between a cheetah, a panther, an asthmatic hedgehog, and a yucca plant, where the first two easily beat the third, but the result is summarised as “Hedgehog is faster”. Technically, it was faster than the plant. But this is a wildly misleading way to phrase the result. It suggests an active attempt to mislead.
This is assuming someone’s even gone to the effort of finding a way to interpret the results in the way they like, rather than just actively making things up. Given what we’re about to see, the latter seems more likely than not.
MIT’s first brain scan study of ChatGPT users revealed shocking results
…did it, though? It basically showed that those who let a software tool do the bulk of the cognitive effort on a mental task display less brain activity than those who use their own brain to do it. Not so much ‘shocking’ as ‘entirely what you’d expect’.
Of course, you could argue it would be ‘shocking’ if the other claim made in the post, about ChatGPT causing increased brain activity, were correct. But it isn’t.
Maybe you could argue that we’re still so early into the ‘AI era’ that any results which suggest it could directly affect the brain are ‘shocking’? But after countless years of ‘dopamine addiction’ nonsense and people invoking neuroscience to condemn anything they just don’t like, this seems unlikely.
ChatGPT user V Non-ChatGPT user
Can’t really object to the straightforward labelling of subjects in an illustrative image.
…except in this particular case, both subjects are represented by the same person!
That is clearly the same subject, presented twice. That’s not how brain scanning, or research in general, works. They would need to be different people: you want the brain being scanned to be affected by only one variable, not two (or more). Otherwise it’s like doing a drug trial where you give both the experimental drug and a placebo to the same person. You might get some data from that, but it’ll be a lot harder to unpack.
Although, now that we’ve started, let’s look at the visuals here.
A still image of brain scans is not especially helpful
A static picture of brain activity, as presumably represented by the green blobs in the brain sections of the ‘two’ participants, doesn’t actually mean anything. Because the key word is ‘activity’.
You see a lot of this online, on dubious sites and pages, showing you, for instance, the ‘difference between a depressed and a non-depressed brain’, then presenting two brain scan photos side-by-side. One usually has many bright blobs of activity; the other has a bare hint of it. But given how brain scanning works, these could be images of activity in the exact same brain, taken five seconds apart.
Using still images of brain activity is like taking a photo of two horses in a field and insisting that this shows that one is faster than the other.
…what is wrong with those brains?
Seriously, something is up with the brains on display here.
For one, this is what the ‘ChatGPT user’ brain looks like.
It has holes in it! Has this person undergone a number of neurosurgical procedures, maybe to remove amazingly evenly spaced tumours? If so, this would be a massive confounding variable in a study about brain function, and they should definitely have been excluded from the study as a result.
Then there’s the brain of the non-ChatGPT user.
This is even more baffling. Let’s ignore that it's red for no discernible reason; the image used suggests the ventricles, the internal spaces of the brain, are much larger than the average person's, which usually suggests advanced dementia. You’d think this would affect readings of brain activity, wouldn’t you?
I mean, I know that, you know that. But presumably the AI image generator behind this ridiculous post doesn’t know that. Hence this picture is what it is.
That’s not even the sort of scanning being used!
Even if you ignore all of the above, the study used EEG, not fMRI scanning. The former, while a tried, tested, and useful tool, doesn't give you brain images, just levels of activity, so this image makes no sense.
This image DOESN’T EVEN LINK TO ANYTHING!
Here’s the thing that baffles me the most: I know popular accounts on Facebook now put links to the articles they reference in the comments, posting a standalone image in the main post. There are many tedious algorithmic reasons for this. And I assumed that’s what was happening here: ‘Genius Club’ had created this cursed ‘infographic’ purely as a means to draw attention to the article it was hoping to direct people to.
But, no. There’s no link in the comments. This image, as painfully flawed as it is in so many ways, is the entire point of the post.
Which suggests that spreading easily disproven, misleading claims, if not flat-out lies, about the ‘benefits’ and positives of AI is an end in and of itself?
You might think I’m splitting hairs, and being overly pedantic about a Facebook post that isn’t meant to be taken so seriously. But I disagree. The supposed ‘Genius Club’ page has over ELEVEN MILLION FOLLOWERS! That’s a lot of people who will be deeply misinformed about how AI affects us.
Sure, a significant chunk of those followers, or members, or whatever, will be bots, or defunct accounts, or people who never see this bilge because the algorithms choke it off.
But if you look at the comments, you see pro-AI people crowing about how this proves that it’s good for our brains and makes us superior.
Is it, though? I sincerely doubt it, and what evidence we have suggests the opposite. But I don’t know. It’s still very early days, and there is no scientific consensus yet. We haven’t had time to arrive at one. Stuff like this ludicrous post, though, is absolutely not helping.
Feels weird to say, but a best case scenario is that it’s all due to some bored social media intern, using AI to create a pro-AI ‘meme’, without any care or concern for the inaccuracies it’s riddled with.
Otherwise, it’s an effort to spread positive propaganda about AI. Which just confuses matters further, about an already deeply confusing, and worrying, issue.
Although it could mean that the AI bubble is closer to bursting than we thought, and someone is desperately trying to keep it inflated, by any means necessary?
Maybe I’m overthinking this. But at least I can overthink it, on my own terms. That’s something.
For more insights into how technology affects our brains and development, please check out my latest book, Why Your Parents Are Hung-Up on Your Phone and What To Do About It.
[1] To clarify, I’m not saying it isn’t, but it’s waaaaaay too early to say for certain either way.
[2] To clarify, this study has not been peer reviewed or accepted by a credible publication yet. All the links to the actual study go to the arXiv pre-print archive, which is seemingly hosted by Cornell University, which confused me for a second. So, while this is good for transparency, because it shows what the researchers are doing and claiming before changes or edits are made, it isn’t an ‘official’ study yet, by normal publication standards. Covering this in the news as a definitive piece of research is like making a final TV series of Game of Thrones without there being a book to base it on. Can you imagine!
[3] Although if you’re describing yourself as a genius, I’m less inclined to be so magnanimous.