AI has a lot of problems, but it's not wrecking our brains
There are countless issues, risks, and dangers presented by 'AI'. But long-term damage to our grey matter isn't really one of them. At least, not yet.

Have you ever loudly moaned about a problem, one that’s getting worse and affecting more and more people, then suddenly realised you’re actually responsible for it?
Maybe you’ve vocally complained about the increasingly bad smell in the workplace, and then, mid-rant about how unacceptable it all is, suddenly remembered the half a tuna sandwich you dropped into a kitchen vent three months ago and didn’t bother to retrieve. The smell is no less rank, but your righteous indignation now rings hollow.
I’ve been feeling like that a lot lately, regarding the many examples I’ve encountered of people insisting that a ‘new thing’, usually some form of modern technology, is “damaging people’s brains”, or words to that effect.
As someone who has, according to reliable sources, made significant contributions to making neuroscience accessible and ‘mainstream’, I may be partly responsible for this ‘neurocreep’.
What’s neurocreep? It’s a word I just invented to describe the increasing tendency to invoke the functioning of, structure of, or harm done to the human brain without valid evidence or logical justification (at least, none that holds up if you have even a modest understanding of how the brain works). It’s usually found when someone is attempting to convince others of the dangers of some aspect of modern tech.
So, predictably, it’s popping up a lot lately in the discourse around AI. Or rather, LLMs, Large Language Models, as they’re not actually AI[1].
A recent example that did the rounds was the article “Everyone Is Cheating Their Way Through College” in New York Magazine, which outlined the substantial impact tools like ChatGPT have had on universities and colleges. So far, so reasonable.
However, one Dr Katie Mack, astrophysicist and sci-commer extraordinaire, flagged up a particular passage in the piece, and asked me for my take on it, as it set her BS detectors tingling.
And, in fairness, rightly so.

Without quoting verbatim, it’s fair to say that the implication of this passage is that there are many concerns that AI is directly harming the brains of young people.
So, are these concerns valid?
Sort of. But then, sort of not.
AI and the brain: hindering, but not harming?

Are AI tools like ChatGPT having a massive effect on universities and how students learn? I don’t see how they couldn’t be.
Is this a big problem for higher education, and beyond? Again, it’s hard to imagine how it couldn’t be. If all the traditional methods of making students work and assessing their progress and learning have been rendered meaningless by technology, that’s bound to be ‘disruptive’, at the very least.
Is reliance on ChatGPT damaging young people’s brains?
…and here’s where I draw the line. Because, despite the claims in the piece about research that says otherwise, I’d say no. AI tools aren’t harming people’s brains.
But this isn’t to say the research cited in the segment above is wrong. Are AI tools having an objectively negative effect on what people’s/students’ brains are capable of? Quite probably. But that’s not the same thing as causing actual harm to the brain and its workings.
Put simply, the brain learns by doing. If you don’t use it to perform a particular task/function/process, the mechanisms that determine the ever-shifting structure of our brain won’t divert any resources to that particular task/function/process.
I’m always slightly wary of generalisations like ‘the brain is like a muscle’, but it’s helpful here. If you exercise a particular muscle group, it’ll get bigger, and stronger. If you don’t, it’ll remain weak. If you exercise it then stop doing so, it’ll get stronger, then atrophy. The various parts and processes of the brain are (sort of) like that.
So, yes, if one cohort of students had to regularly apply their cognitive abilities to researching and writing essays and reports, they’d be engaging and utilising all the brain processes required for such tasks. Their brains would dedicate more resources to these abilities, become more adept at them, and tests that assessed these cognitive processes would logically reflect this.
Meanwhile, if a later cohort of students relied heavily on ChatGPT to produce the same assignments, meaning a drastic reduction in engagement of the relevant cognitive processes required to do those assignments, then the aforementioned tests would likely show lower scores, compared to those of the older students who had to use brainpower rather than ChatGPT.
But this doesn’t mean that their brains have been damaged in some tangible way. Because there’s a world of difference between ‘haven’t learned something’ and ‘can’t learn something’.
Basically, while overreliance on ChatGPT may prevent students from developing the usual cognitive abilities, it doesn’t mean they can’t. Or never will. Because the underlying processes in the brain remain unchanged.
If the students suddenly lost access to ChatGPT and similar tools, they’d have to learn how to do their assignments themselves. And they could learn. They might not like it, and it might take a while, but they’d still be capable. Because, all things being equal, their brain has not been damaged.
Look at it this way: you give two people a bag of gold coins each. One decides to invest it sensibly, and the other opts to invest a quarter of it and bury the rest for safekeeping. Of these two, who’s richer?
With interest rates and dividends etc, sure, it’s probably the first person. But even so, the second person is not poor! They have a bag of gold! It’s not been destroyed or melted or stolen just because they buried it. They’ve still ‘got’ it. They’d have to dig it up, get messy in the process, and it’ll take a while to catch up with the first person, but they’re still able to do this. As long as they’re willing to put the effort in.
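If you want to see the arithmetic of that analogy play out, here’s a quick Python sketch. The numbers are entirely invented for illustration (a 100-coin bag and a 5% annual return on whatever gets invested; the analogy above specifies neither):

```python
# Toy model of the buried-gold analogy. All figures are invented
# for illustration; nothing here comes from the article itself.
BAG = 100    # coins each person starts with (assumed)
RATE = 0.05  # assumed annual return on invested coins

def investor(years: int) -> float:
    """Person 1: invests the whole bag sensibly."""
    return BAG * (1 + RATE) ** years

def burier(years: int) -> float:
    """Person 2: invests a quarter, buries the other three quarters."""
    invested = 0.25 * BAG * (1 + RATE) ** years
    buried = 0.75 * BAG  # untouched, but not destroyed
    return invested + buried

for years in (0, 5, 10, 20):
    print(f"{years:>2} years: investor = {investor(years):6.1f}, "
          f"burier = {burier(years):6.1f}")
```

The gap between the two grows every year, but the burier’s total never drops below the 75 coins sitting in the ground. Which is the point: underused is not the same as destroyed.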
And so it is with brains, and the effects easily-accessible tech has on them. The capacity to learn and cognitively develop isn’t damaged, it’s just underused. After all, why dig up your gold when you’ve got a robot friend who keeps paying all your bills?
Why do people think this, and why does it matter?

The tendency to assume/insist that AI and modern tech are genuinely harming our brains is an odd but persistent one.
In the article segment presented earlier in this post, the author flags up research suggesting that, over the last 20 years, the Flynn Effect[2] has diminished, or even gone into reverse. And what other explanation could there be for this lack of intellectual advancement in younger people than the presence of awful technology like phones and AI!
It’s an emotionally compelling argument, sure. But there are other explanations for the reduced Flynn effect. Optimistic ones. I wrote a New Scientist article about it. In short, the reduced Flynn effect could just as easily be due to older people not losing their cognitive faculties, so there’s less of a stark difference between them and fresh-brained youngsters.
Why are older people’s brains holding up for longer? Maybe it’s the downstream effects of better healthcare, more cognitive stimulation throughout life, even… the omnipresence of technology. After all, technology benefiting the brains of older people is no less credible than it damaging the brains of youngsters.
But that’s never the default assumption. “Tech breaks brains!” seems to be the conclusion arrived at in advance, and narratives supporting it are worked out accordingly.
Why? Again, many reasons. Numerous people seem to equate abstract cognitive issues with fundamental neurological problems. I hold my hands up and admit I’ve done this often in my writing, without even realising: saying “Your brain does this” when I should be saying “Your mind/cognition/psyche does this”.
It may seem a minor issue, but it's like assuming every issue with your computer is a hardware problem, when it could easily be a software one. You wouldn’t insist that your processor is borked if your web browser stopped downloading things. Declaring that any tech-induced bad habit or deficit suggests fundamental brain problems is the same thing.
There’s also the common belief, as Douglas Adams astutely observed, that if something is ‘normal’ then it is ‘natural’. Ergo, if young people can’t do something ‘normal’, then what’s happening to them is ‘unnatural’. Being able to research and produce essays and reports is a useful skill to have, but it’s not ‘natural’. It’s not some fundamental trait in the brain.
We saw this a while back when the Conservatives insisted that rote learning was the key to educational success. Because being able to memorise and recite sonnets is a key life skill. Apparently.
I confess to being a bit facetious there. Because overreliance on ChatGPT is less like swapping one cognitive skill for another, and more like not applying cognitive effort at all. Though I wouldn’t know first-hand; I’ve never used the cursed programme. Everything you read here is 100% ‘organic’.
And then there are the more cynical explanations. Like, if you want to persuade others that your suspicions and paranoias about tech are right[3], saying “It’s damaging brains!” is more impactful than “It’s preventing them from developing the same degree of cognitive capability in this area, but their underlying neurology remains unchanged”. If you’re more concerned with validating your suspicions than reporting the facts, I guess this is a useful strategy. Not one I’d approve of, though.
But whatever the rationale, I’m always uneasy at the tactic of attaching valid points to dubious science. Because in attempting to boost those valid points, you seriously risk undermining them.
I’m on the record as not liking AI and all the problems it presents. I do think the chaos it’s caused in the realm of education and humanities is a very serious issue that needs dealing with immediately. I am of the opinion that tech companies behind the proliferation of AI operate with a degree of immorality and sociopathy that, in an individual, would warrant Hannibal Lecter-style restraint.
And therefore, I believe that any effort to highlight the dangers and hazards of what they, and their AI creations, are doing needs to be as rigorous as possible. Because if they can turn around and say “You say AI is bad and damaging brains, but here’s all the evidence which shows it isn’t”, it risks undermining the whole case against them.
That’s just how it works sometimes. Case in point: I’ve read the whole New York Magazine article, and while I agree with a lot of it, if you’re going to uncritically cite Jonathan Haidt in support of your argument, your right to crow about the dangers of anyone else using mindless sources to produce their conclusions is seriously compromised.
If you like books and works written by bona-fide 100% human brains, check out some of mine, including the latest, Why Your Parents Are Hung-Up on Your Phone and What To Do About It.
1. This is a whole other discussion. Some would argue that LLMs aren’t intelligent in any recognised sense, so shouldn’t be labelled as AI. But if LLMs are just presenting the artifice of intelligence, then…
2. The observation that each new generation tends to be more intelligent, according to standard metrics of intellect, than the previous one.
3. Or, you want to tell them that their pre-existing suspicions are correct, which is a great way to generate traffic.

