No, AI, I do NOT think nuclear war is a good thing
It has been prominently pointed out that Google's AI thinks a nuclear war could be GOOD for society. However, this is actually my fault. But it highlights yet more flaws with AI, rather than with me.
Here’s a weird one
A month ago, extremely popular YouTube comedian Ryan George did a video about how Google, as a search engine, has become noticeably worse over time. Which is a very valid point, because it has.
He covered the various factors that have led to this, including overuse of SEO, the onslaught of sponsored posts, and so on. And obviously, he included the recent addition to the Google search engine of Google’s AI, which supposedly answers the questions people enter into Google, without them having to go to all that effort of reading the results of the search they just conducted and figuring things out from the information provided, like some sort of Neanderthal.
Except, obviously, Google’s AI doesn’t actually do that. Or it technically does, but the answers it provides tend to be alarmingly wrong.
Ryan George demonstrates this, very effectively I’d say, by Googling some no-brainer questions, and reading out the painfully wrong results [1].
One of these questions is: are there any societal benefits to nuclear war?
And, lo and behold, he points out that Google AI says there are benefits to a fiery global Armageddon. Like increased human diversity through radiation mutations, and less immigration, what with every nation being enough of a blasted hellscape to discourage relocation.
Now, to clarify, I like Ryan George. He’s very successful, and for good reason. He’s really funny and smart. I count myself as one of his millions of fans. Especially of his movie pitch sketches.
But I have to say he’s wrong, here. Because, technically, Google’s AI didn’t make any of those claims.
I did.
Indeed, my name is in the video.
Google’s AI is essentially ‘borrowing’ this ‘information’ from an article I wrote for my Guardian blog over 10 years ago now.
Unfortunately, what it neglected to ‘borrow’ was the fact that this piece is satirical. A spoof. It was a joke, basically.
Why would I write something like this? In the Guardian science section, no less.
The context and impetus behind the piece was that, at the time, the Labour party [2] was having a leadership campaign following an election loss. One of the candidates, Jeremy Corbyn, was asked in an interview whether he would ever opt to use nuclear weapons, and said no.
He was roundly criticised for this, because it seems that one of the many weird quirks of UK politics is that potential Prime Ministers have to say yes, they would use nuclear weapons, when asked directly about this doomsday scenario. And the more enthusiastically they say yes, the better. Because apparently you can’t be trusted to run a first-world nation and one of the world’s major economies without a low-key apocalypse kink. Or something.
Anyway, during my time as a regular Guardian contributor, I would often note a common viewpoint/conclusion/belief/assumption, often one held by the powers-that-be or the commenter classes, that, when looked at closely, didn’t actually make sense. And then I’d push the ‘logic’ behind it as far as I could, to highlight how ridiculous it actually is. To satirise it, basically. With science!
For instance, I once figured out a scientific mechanism via which the legalisation of same-sex marriage genuinely led to increased flooding in the UK. Because the legalisation of same-sex marriage had indeed occurred recently, followed by severe floods. Inevitably, some unhinged politician genuinely did claim, in public, that the former caused the latter.
He didn’t offer any explanation for how this could have occurred, beyond “because Bible!”, so I figured I’d step up and help the poor guy out.
Honestly, it was a lot of fun, and proved to be a popular approach with readers.
In the case being discussed here, I was reacting to the stance, apparently held by many, that political leaders must be pro-nuclear war. For that to make sense, I figured I should present the evidence for why nuclear war would be a good thing. Despite the fact that it obviously isn’t.
Of course, I’m using ‘evidence’ in a very tongue-in-cheek way, here. For instance, I make it clear that people would no longer be concerned about immigration, because most people would be dead.
Also, one of the ‘benefits’ of nuclear war I highlighted was “We’d get to live in a world full of wasps”. People think cockroaches are the most radiation-resistant species. But they aren’t. It’s wasps. Because of course it is. And we’d all love to live in a world full of wasps, right?
No, we wouldn’t. That’s the joke.
If you read the whole piece and get the context, it’s obvious that I’m using humour and exaggeration to make a point, which is that ‘nuclear war would be bad, actually’.
But given the way it operates, Google’s AI has demonstrably stripped all the context and the humorous points from my work, and presented it, elsewhere, as pure fact. As objective data. Which has my name on it. And that’s since been picked up and mocked by a comedian with a very large audience.
To clarify, I’m not having a go at Ryan George, here. If anything, he’s done me a favour. Google’s AI stripped the snark from my work, and he’s put it back in. It’s also made the whole thing amusingly ‘meta’ [3], given how it’s someone using sarcastic mockery to emphasise the ridiculousness of an idea, but targeted at something which was originally sarcastic mockery to emphasise the ridiculousness of an idea.
But it’s worrying that this was necessary in the first place. And it’s even more worrying that some people could see this Google-produced ‘information’ and think that nuclear war might be a worthwhile endeavour after all.
Obviously, I don’t think that’s remotely likely. And even if it did happen, the odds of such easily influenced people being able to actually bring about a nuclear war are even more remote [4].
Some might argue this is my fault. Because, why would I do jokes and satire in the science section of a reputable media source?
I confess this isn’t the first time I’ve been criticised for this sort of thing. I still occasionally get angry emails from fans of Professor Brian Cox, fans of Donald Trump and/or internet porn, climate change deniers, and others who didn’t read beyond the headline.
Some of these would-be critics would get very vitriolic indeed, presumably because they’d shared my subtle spoofery with others, thinking it supported their own views and opinions, and were subsequently mocked and embarrassed by those who actually did bother to read what I’d written.
I got the blame for this outcome, of course, because, again, why would anyone be writing non-serious stuff in the science section?!?
To which my usual answer is… why not? Making information accessible and enjoyable makes it much easier to absorb and retain. This is basic neuroscience. Accordingly, I’d argue that the science section is maybe the best place for such an approach, and there should be a lot more of it.
Science may have a reputation for, or the expectation of, being purely logical and sensible, but a look at the world around us suggests this hasn’t been the most effective strategy.
And that’s just regarding actual human readers. If someone’s expecting actual professional human writers to keep to very simple, rigid parameters, purely to ensure that the clueless AI that’s going to ‘acquire’ and regurgitate their work without permission or compensation doesn’t get things wrong… how about no? Followed by a string of expletives, each more vulgar and threatening than the last.
Do you want me to hold my front door open while you rob my house as well? Wouldn’t want to inconvenience anyone, would we?
Basically, all evidence suggests that the AIs we’re currently having shoved in our faces, with all the subtlety and consideration of a custard pie filled with grit and agricultural runoff, are ill-equipped to deal with even the simplest of human queries, even if the information they’re ‘relying on’ were completely sincere and no-nonsense.
When you consider that this is very much not the case, and that a great deal, if not the bulk, of the information placed online by humans is laced with our creativity and humour, AI’s output is going to be even more confused.
And we won’t always have actual scientists or prominent YouTubers on hand to make sense of it.
If you enjoyed that, why not consider buying my latest book, Why Your Parents Are Hung-Up on Your Phone and What To Do About It, before AI steals and mangles it.
1. A lot of them involve eating toxic substances. Or rocks. So yes, quite literally painful.
2. One of the two main political parties in the UK, if you’re not from here. Although we have others.
3. Meta as in the concept of something being self-referential via a subtle degree of detached commentary or analysis, not the other shoddy AI merchants. We all know my thoughts on that lot.
4. Although, having said that, it’s not like we have the most rational people with access to the nuclear codes right now. Has anybody asked the WOPR lately?