It's about half-way through an explanation like that where you realize you aren't anywhere near as clever as you think you are and there isn't going to be a second date.
I don't blame her for thinking that, though. Humans seem addicted to ascribing human qualities to non-human things. It's an occupational hazard, I suppose (we make the same mistake with aliens). Our robot vacuum cleaners get lonely, our cars get finicky, and if you've read the news lately, our computers will eventually hate our guts and terminate us.
"You have to sleep some time."
Second, and more likely (if you ask me), is that an artificial intelligence advances so fast that it destroys us incidentally. Not because it's out to get us, but because in its quest to reorganize the matter of our solar system - we sorta get "reorganized" too. We're not bad guys, we're just ants - and our lives don't get much of a thought.
And third, somewhere in the middle, is the conclusion news stories like to leap to: the intelligent computers will get out of our control and "decide" to kill us. That's the really scary part. That's the Terminator. The idea that computer intelligence gets out of control isn't just scary, it rather pisses us off.
Happily, I don't think any of those terrible things will happen. At least not first. Unhappily, that's because there's still one intermediary step we aren't considering. Someplace we'll get to before the evolution of computer intelligence decides we're dangerous, useless, or irrelevant. AI scientists will spend many years working on autonomous intelligence – but they won't make it. Not in our lifetimes anyway.
To start – consider where we are today:
Earlier this year, I was driving in a northern Michigan snowstorm headed to Detroit airport. I was worried that, given the storm, my flight might be delayed. Thusly, I grabbed my phone and without knowing if it would work I said to it:
"OK GOOGLE, what is the status of my flight today?"
Within seconds, Googlebot (or maybe it was Larry Page - not sure) responded:
"Flight XYZ from Detroit, Michigan to San Francisco, California is scheduled to leave on-time at 2:30pm".
Pretty cool, huh? If you're like me, you're sort of thinking that was cool, but big deal - it should do that. OF COURSE it should do that - I could have done that (had I not been driving). After a lot more thinking about it, however, I'd like to point out that boy, are we a snot-nosed, ungrateful species that takes amazing things for granted.
A stunning array of technologies just came together to make that happen. So much so that I'm convinced I could write a full-length blog article just listing them. In the name of sticking to the topic (i.e. complete human destruction caused by the emergence of AI), let's take for granted the everyday sorcery of talking to thousands of computers around the world; I'll just focus on the "artificial intelligence" parts. (Where "intelligence" may have a fuzzy definition.)
Simply: I spoke to my tiny hand-held computer in English. It heard me start with "OK Google" and knew I was addressing it. It then parsed the rest of my words and realized I had asked a question (it likely offloaded that work to a remote computer). It can also recognize the voices of millions of other people speaking in different accents and dialects. I could likely have phrased that question many ways and it still would have worked. It parsed my question and understood I was asking about a flight. It then scanned my Gmail to find the flight reservation I had made months before. From that it examined the outbound and return flights and realized the outbound had already happened.
It might have realized my current location was in Michigan, near(ish) the Detroit airport, and further understood that I was asking about my return flight. It then hit some real-time flight database to know if the flight was still on time. It might have checked Detroit Airport in general for delays to decide if it should respond in a qualified manner. It then formulated a perfect English sentence, maybe with consideration of how I had phrased mine, generated the audio in a human voice, and played it aloud for me.
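The pipeline above can be sketched as a toy program. To be clear: none of this is Google's actual code - every function name, data structure, and rule here is a made-up stand-in for one stage of the real system (which uses learned models, not keyword matching).

```python
# Toy sketch of the assistant pipeline: wake word -> intent -> email scan
# -> pick relevant flight leg -> live status lookup -> English answer.
# Everything here is a hypothetical stand-in, not a real API.

def heard_wake_word(utterance):
    """Stage 1: only respond when addressed with the wake phrase."""
    return utterance.lower().startswith("ok google")

def parse_intent(utterance):
    """Stage 2: crude keyword intent detection (real systems use learned models)."""
    return "flight_status" if "flight" in utterance.lower() else "unknown"

def find_reservation(inbox):
    """Stage 3: scan a (toy) inbox for a flight reservation."""
    for msg in inbox:
        if msg.get("type") == "flight_reservation":
            return msg
    return None

def pick_relevant_leg(reservation, today):
    """Stage 4: skip legs that already happened; ISO dates compare as strings."""
    future = [leg for leg in reservation["legs"] if leg["date"] >= today]
    return future[0] if future else None

def answer(utterance, inbox, today, status_db):
    """Run the whole pipeline and formulate an English sentence."""
    if not heard_wake_word(utterance):
        return None
    if parse_intent(utterance) != "flight_status":
        return "Sorry, I didn't understand."
    reservation = find_reservation(inbox)
    if reservation is None:
        return "I couldn't find a reservation."
    leg = pick_relevant_leg(reservation, today)
    status = status_db.get(leg["flight"], "unknown")  # Stage 5: live status lookup
    return f"Flight {leg['flight']} is scheduled to leave {status} at {leg['time']}."

inbox = [{"type": "flight_reservation",
          "legs": [{"flight": "XYZ", "date": "2015-01-02", "time": "9:00am"},
                   {"flight": "XYZ", "date": "2015-03-14", "time": "2:30pm"}]}]
status_db = {"XYZ": "on-time"}
print(answer("OK Google, what is the status of my flight today?",
             inbox, "2015-03-10", status_db))
# -> Flight XYZ is scheduled to leave on-time at 2:30pm.
```

Even this cartoon version makes the point: the "magic" is a chain of unglamorous steps, each one fetching or transforming data, and the answer falls out the end.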
Go ahead, be not impressed - I dare you. Clichés be damned. We truly live in amazing times.
So that's now. What's coming next? How about:
"OK Google, what's the probability my flight will crash today?"
All in an instant, it could scan existing weather reports and correlate them with weather-related crashes across history. It could scan failure reports for my type of aircraft, including the precise maintenance record for my plane and the careers of the exact mechanics that last serviced it. It could cross-reference that with my pilot's flying record and check his credit card transactions today to make sure he's only been pounding Mountain Dew. It could scan all known terrorist databases for bad-guy whereabouts, guess whether any might be in Detroit today, and cross-correlate that with how high-value a target I am (hint: low) or my flight is.
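The arithmetic behind such an answer is simple even if the data-fetching isn't. A back-of-the-envelope sketch: pull a risk estimate from each (imaginary) source and combine them, naively assuming the factors are independent. The source names and the numbers below are purely illustrative.

```python
# Toy risk aggregation: if each independent factor i fails with probability
# p_i, then P(any failure) = 1 - product of (1 - p_i).

def combined_crash_probability(risk_factors):
    """Combine independent per-factor risk estimates into one probability."""
    p_all_safe = 1.0
    for p in risk_factors.values():
        p_all_safe *= (1.0 - p)
    return 1.0 - p_all_safe

# Hypothetical per-factor estimates, one per data source from the text:
risk_factors = {
    "weather_history_match": 1e-6,  # storms like today's vs. crash records
    "aircraft_maintenance": 5e-7,   # this airframe, these mechanics
    "pilot_record": 1e-7,           # flying record and today's purchases
    "security_threat": 1e-8,        # low-value target (hint: low)
}
p = combined_crash_probability(risk_factors)
print(f"Estimated crash probability: {p:.2e}")
# -> Estimated crash probability: 1.61e-06
```

For probabilities this small the answer is essentially the sum of the factors; the hard part, of course, is everything upstream of this function - getting trustworthy numbers out of thousands of messy data sources.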
Of course, the list could go on. Mind you, this whole system is not actually intelligent or conscious. It's just incredibly good at fetching, analyzing, and correlating data. It has components of what we currently call artificial intelligence, but it is not, by itself, an autonomous intelligence. Fetching data is not intelligent as we'd generally consider it, but knowing how to put that data together at least enters that realm (and surely fools humans into thinking it might be).
What about next year? Where's "OK Google" going to be then?
"OK Google, what stocks will go up tomorrow?"
"OK Google, how do I get [attractive person] to go out with me?"
"OK Google, how do I get my boss fired?"
Hmm. If the answers that came back had a good chance of being correct, this is getting interesting. Consider: if you've got a system that can access thousands of data sources in the blink of an eye and has the wits to put that data together smartly - or even manipulate it - you've basically got yourself something that's starting to look like a superpower.
The questions above are, overall, rather innocent. But you might be noticing a sneaky trend in there. Absolute power corrupts absolutely, you know.
Now of course, you're a good person. You wouldn't do anything the slightest bit devious. But sad to say, not everyone is like you. Some people are not as nice. You'd only use your superpower for good, but they... they might not. They might use this superpower to do things that get them power or money. Maybe at your expense. Maybe at a lot of people's expense.
People already rob banks, hack systems, steal identities and crash airplanes. And so far, they haven't even had a super-smart computer system to help them. If they had such a thing, things could get vicious fast.
If everyone had this magic superpower - chaos might ensue. It'd be all out AI war. Someone has to stop this. And I'm guessing - someone will.
We need to keep this superpower out of the hands of the regular folks (that's us). What you'd guess will happen is that there'll be the is-my-flight-on-time "OK Google" superpower for you and me, and some far more powerful (and probably sketchy) version for a select few.
Those privileged few won't have the wimpy little public-use "OK Google"; they'll have exclusive access to something far more powerful, something head-and-shoulders more potent - probably something like "OK Sauron". (Reference: Lord of the Rings, all-seeing eye, known for mischief.)
Who's privileged? Well, your guess is as good as mine. Probably the inventors for a while. But maybe governments at some point too.
Neither Sauron nor "OK Google" is, or will be, limited to looking things up. They'll be able to change things. Electronic things at first - schedule meetings, send emails, etc. But real-world things too. Fly drones, open doors, drive cars.
Now what questions get asked of Sauron? Again, who knows for sure - but how about:
"OK Sauron, hack into [PersonX]'s bank account and make it look like [PersonY] did it."
"OK Sauron, have the brakes on [your enemy]'s car fail."
"OK Sauron, take control of [PersonZ]'s robot vacuum, make it appear to be lonely, but then go into attack mode."
Now, if you had such a superpower, you'd probably like to keep it. You'd probably not be all that happy about some pesky autonomous computer intelligence coming online and starting to ask questions. In fact, the fewer challengers to your power, in any form, the better. The environment becomes one of cyber-evolution: survival of the fittest, with the fittest having a distinct focus on eliminating the competition.
"And Sauron, by the way, devote all spare processing cycles to keeping yourself hidden while searching for other intelligent systems in existence or in development - and if you find any, obliterate them".
Sauron won't be just another weapon. It's a completely new way to control and manipulate the world, on a scale we can only imagine.
So I think you can take some solace that computers probably won't (anytime soon) evolve to be out to get us. But for as smart as humans are, they can do some rather dangerous things. Giving a very few people a lot of power has often not worked out well for humanity as a whole.
I'm guessing we won't need to wait for Artificial Intelligence to evolve into intelligent beings that want to kill humans. Humans are already intelligent beings that want to kill humans. Non-sentient artificial intelligences just need to get to a certain point of utility to become unimaginably dangerous tools that humans can use to terrible effect.
It's interesting that, for the most part, we've been worrying about computer intelligences becoming smarter than us and getting out from under our control. At worst, the outcome of that is simply an unknown. At best, who knows - it might not just be smarter than us, but far wiser. If we're lucky, it could end up being the thing that saves us.