Wanna survive the A.I. apocalypse? Me too. I’ve been giving it some thought, and I think I have a pretty good plan.
But first, how did we get here?
Like you, I saw the announcement of ChatGPT late last year and have heard all the hype about how it’s going to change the world (or end it). Until recently, I thought of A.I. as just the next 3-D printing or Bitcoin or web3 or NFT or metaverse or fill-in-the-blank swanky technology that will change the future – something that gets a fresh hype cycle and then predictably fails to meet overzealous expectations. It was easy to dismiss the boosterism.
But just a few days ago, the latest development in A.I. changed my perspective. On Monday, Geoffrey Hinton, the so-called “Godfather of A.I.,” resigned from his post at Google, in part so that he could speak freely about the technology he helped develop, which he now regrets. (Along with a few of his students, he developed the initial technology that underpins A.I. as we now know it.) Until last year, he felt the kind of tech that Google and OpenAI (the maker of ChatGPT) had built was still inferior to the human brain. But in just the past few months he has come to recognize that A.I. is advancing so quickly that in some ways it is, in his words, “actually a lot better than what is going on in the brain.”
Computer science has always borrowed human words to describe computing (e.g., memory), but it’s still eerie to note the language used to describe A.I.: neural networks, artificial intelligence, large language models, machine learning – these are words we use to identify what makes us, as humans, unique. While it seems unlikely that A.I. can fully replace a human brain (yet), the language already supports such a notion.
The upside of an A.I.-infused world seems endless: five-minute term papers, novel drug discovery, hyper-personalized learning. But, as the Godfather of A.I. has recognized, the downside is moving from science fiction to nonfiction. Have we arrived at the moment that computer scientist and science fiction writer Vernor Vinge predicted in 1993? “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
Since March, 27,000 concerned citizens who share Hinton’s worries (Vernor Vinge, presumably, among them) have signed the widely reported open letter that calls for a pause in advanced A.I. development. It states, in part, “AI systems with human-competitive intelligence can pose profound risks to society and humanity.… Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” The letter received a few weeks of headlines – and then the commercial race to advance A.I. continued, presumably unabated.
The way we survive the A.I. apocalypse is by being human. The best thing we have going for us is our humanity, a package deal that includes, yes, intelligence, but also a range of emotions, intuition, senses, community, empathy, and creativity. This is our advantage; these are characteristics that remain purely human.
A few weeks ago, I spoke with one of the smartest technologists I know, my friend Dave, who’s building a business in A.I. (Ever heard of Flippy, the hamburger-flipping robot? Yeah, that’s Dave. Guy’s a genius.) I asked him to explain what he’s building in the simplest terms possible. It took him 18 minutes to tell me what his business does – partly because Dave talks a lot – but mostly because it’s an innovation built on entirely new technologies and concepts that were unfamiliar to me. (No spoilers here – I’ll let the tech headlines share what Dave is working on when it finally makes its commercial debut.)
Dave is the reflective type, and he shared with me how the extraordinary advances we’ve seen in even the last few months have given him pause: “The latest breakthroughs – and trust me they keep coming – have caused nothing short of a personal identity crisis. I hate to admit it, but my whole life I’ve based my self-worth on being intelligent. What if human intelligence becomes obsolete? Who am I after that?”
Which brings us back to our original challenge: surviving.
The question we need to consider is: What advantages do we have that A.I., even with a nonhuman mind, doesn’t or can’t have?
In short, the way we survive the A.I. apocalypse is by being human.
Being human is our only hope.
But, above all, if the A.I. apocalypse comes and you find yourself in a standoff with RoboCop, the other way you survive is by finding the kill switch, which I imagine is a traditional light switch conspicuously located between RoboCop’s metallic shoulder blades.
Flip that switch and make a run for it.