Facebook can read your mind! (but only if you want it to)
Meta (i.e. Facebook) has reported that it can now read people's thoughts directly from their brains. But the claims made about this technology fall far short of reality, thanks to how our brains work
It’s usually Elon Musk who’s the go-to tech billionaire for blundering into my field of neuroscience, for dubious reasons. However, he’s not the only game in town, as evidenced by recent news that Mark Zuckerberg’s Facebook[1] has developed a way of allowing people to type using only their brains!
You might think it’s cool! Being able to type just by thinking the words, rather than having to go through all that arduous effort of moving your fingers around on a keyboard? That’s real science-fiction stuff! And progress and advancement are always things to be celebrated in their own right.
Alternatively, you may find this deeply alarming. Being able to ‘see’ what someone is thinking by looking directly into their brain? That opens up all sorts of terrible possibilities regarding privacy, autonomy, and more. And that’s before we consider that it’s being done by organisations with such dubious track records regarding individual safety and wellbeing as Facebook, and Mark Zuckerberg.
Accordingly, responses to the news of this study were… ‘concerned’, shall we say, with fears of privacy violations due to tech companies like Facebook being able to read people’s brains, and discern their inner thoughts, whether they like it or not.
But how valid are these fears? I’d argue, not very. Not just yet, at least. And they may never be. Here’s why.
Facebook can’t actually read your mind. Nor can anyone else.

From the headlines alone, you’d assume that Meta (or whoever else is exploring this technology[2]) have developed the ability to look directly into someone’s living brain, and literally read their thoughts, in the same way you or I would read the subtitles or captions on a video.
That’s not what’s happening, though.
I was going to do a thorough rundown of all the aspects of this particular study of Meta’s which reveal that we’re actually quite a long way off from the direct mind reading that many are afraid of. Luckily, I don’t have to, as that’s already been done. I strongly advise reading through the linked piece, but here’s a brief rundown of the salient points:
There was a lot of variation in accuracy, regarding how well the tech/AI could ‘translate’ the brain activity into actual text. At best, it was 80%, which isn’t bad at all, but only a few subjects managed that. For most, it was much lower.
The best results, the only ones close to being practically useful, were derived from MRI scanners. This approach requires a setup involving incredibly powerful, super-cooled magnets which weigh several tons, cost millions of dollars, and could cause severe injury if not used properly. The idea that this tech could be ‘household’ any time soon is wildly optimistic. It won’t be something your smartphone is capable of for a loooooong time, if ever.
Results from cheaper, less-involved methods (in this case, EEG, which is still far from a convenient, portable technology), were nowhere near as accurate, so proved of little use.
The study looked at people typing certain words, not just thinking them. This might not seem like a significant difference, but it is, in terms of brain activity. It means the motor system is being activated, to repeat a very specific set of physical actions. This means more brain activity will be generated, and it will be a lot more consistent, i.e. more predictable and readable.
If subjects deviated from the process at all, the results quickly became useless. As in, if they didn’t think about the specific words they were typing, the results were essentially meaningless.
Basically, when you stick rigidly to the available information, what these studies actually reveal is that, under a very specific set of circumstances and through a combination of very precise factors, prohibitively advanced and elaborate technology can be utilised to detect the words being represented in an individual’s brain activity, with a degree of accuracy that can be described as ‘decent’, in the best cases.
And here’s my issue: for all the technical limitations (and, admittedly, possibilities) that can be, and regularly are, discussed, I think it’s the last point in the list provided above that’s key. And one that’s invariably overlooked[3].
Basically, as mentioned in the title, Facebook, or any other tech organisation, can only ‘read your mind’ if you let them. If you wholeheartedly cooperate with their efforts. And this presents a massive Achilles heel to any plans to use this technology nefariously, or secretly. So much so, it’s more of an Achilles left leg.
People need to let tech companies read their mind (and, will they?)

You know those “I bought my own home in my early twenties!” articles? The ones which always, always, delay as long as possible before mentioning the massive inheritance/parental loan the supposedly economically savvy youngsters were given, to afford this home?
If someone on a major media platform were to read these articles, then publish a piece titled “The housing crisis is over! Homes for all!”, this would be deemed… premature, at best. Foolish, even, if we’re being needlessly polite.
With the best will in the world, claims like “Facebook can read your mind! Privacy is over!” are basically doing that. It’s not that the tech isn’t impressive or that we shouldn’t keep an eye on it, but, as stated, it depends utterly on the consent of those whose brains are being assessed.
If anything, that’s underselling it. Every example of mind-reading tech I’ve seen, however impressive, has inevitably involved the AI/software responsible having extensive exposure to the subject’s brain activity when they’re reading or typing, to ‘learn’ how their brain activity corresponds to verbal/linguistic constructs.
And this is only feasible if the subject is willing to give their time, efforts, and absolute cooperation to the process. Which I doubt most people would be.
Perhaps I’m being too optimistic about my fellow humans here. You could point out that countless people have already willingly surrendered their personal details and data to tech companies. And when was the last time anyone actually read the Terms and Conditions on a purchase before clicking ‘accept’?
Good points all round. But my cynicism-infused counter would be that those sorts of things are easy. And convenient.
“If you click this box, you can have free access to a massive online world of content and entertainment” is a compelling offer.
“If you lie in this cold screeching chamber and read simple text for hours on end, you may not have to do as much typing as you usually do” is not[4].
Perhaps the technology will advance, making the procedure a lot simpler and more accessible? Can’t rule it out, I guess. But there are no signs of that happening any time soon.
And even if the technology were to be streamlined substantially, I’d still argue it’s premature to worry about a free-for-all on access to our very thoughts, because most of the coverage and discourse lately seems to be misunderstanding, or seriously underestimating, how the brain actually works.
For instance, if enough people voluntarily let Facebook read their brains (which could certainly happen), wouldn’t whatever AI or LLM was used gain enough experience to decipher the brain activity of any brain, because it would have enough data to extrapolate from?
I can’t 100% rule this out, as this is beyond my remit. But in cases like this, I always come back to… fingerprints.
Let’s say Facebook, or whoever, wanted to gain access to the files on a laptop of mine, one which was biometrically coded to my fingerprint. So to achieve this, they got an AI, and loaded it with the fingerprints of 200,000 other balding Welsh nerds in their forties[5], then told it to extrapolate my fingerprints from that data set.
Would that work? If it did, it would presumably only be down to a wild coincidence. Because my fingerprints are ultimately unique to me, regardless of how many people share my physical or genetic traits.
And that’s just fairly-regular patterns of lines, in a small area of skin. The arrangement of neurons and how they develop and interact is many orders of magnitude more varied between individuals. And while I may be leaving fingerprints on every shiny surface I touch, I’m not exactly leaving patterns of brain activity behind me, like some baffling smoke trail.
Ultimately, I’d bet that any AI would have to painstakingly learn an individual’s unique activity patterns, before it could read their mind.
And let’s be honest, everybody has found themselves riding a train of thought they can’t recall the origin of, or humming a song that was dredged up from their memory for no obvious reason. The boundary between conscious and unconscious thinking is much hazier than many assume. My point is, your basic brain is so complex and convoluted that we ourselves often don’t know what we’re thinking. How is external software meant to do any better?
And that’s before we include considerations like neurodivergence, or aphantasia, or any of the other myriad quirky ways that human brains differ from ‘the norm’. When you consider that the advanced AIs in self-driving cars still struggle to recognise pedestrians and their intentions, the notion that they’re on the verge of discerning perfect meaning from the churning electrochemical froth in everyone’s prefrontal cortex does seem especially far-fetched.
I could be wrong, and it’ll ultimately turn out to be a lot more consistent, to an extent that a sufficiently advanced AI would be able to discern meaning from a unique, hitherto unencountered sample of brain activity. But this is by no means a given. Yet so much coverage seems to think it is? This doesn’t do anything but worry people needlessly. And we don’t really need that at present, surely.
One conclusion to be drawn from this is that the tech industry has taken the phrase “the brain is like a computer” far too literally. Because in so many ways, it isn’t. And until they change their approach and acknowledge this, their efforts at (exploitable) tech-brain interfaces for the masses are always going to fall short.
And that’s not exactly a bad thing. Although it makes me think I shouldn’t have written this?
The best way to discern what’s happening in someone’s mind remains to actually read a book they’ve written. Like one of mine, for example.
[1] Technically the company is called ‘Meta’, yes. But we all know what I mean!
[2] This research is by no means unique to the Facebook spods. Some variation has been popping up in the news for several years now.
[3] For good reason, arguably, given how this sort of thing is reported. But even so.
[4] This is also why I have deep reservations about Neuralink’s promises. I don’t care how many fanboys Musk has, “Let the employees who work for my increasingly unreliable companies perform actual brain surgery on you” is always going to be a tough sell for a population that still mostly objects to going to the dentist.
[5] I don’t know if there even are that many of us. But this is just a hypothetical, so just go with it.


