Tuesday, November 11, 2008

A thought experiment on the end of Humanity

Pretty slick title huh? Thought of it myself.

My girlfriend used to be a high school teacher. She told me an interesting fact - she said one of her bigger issues was that when a student handed in an assignment, it wasn't uncommon for it to be something the student had simply found on the Internet, cut-and-pasted into their word processor, and turned in as their own. If you're like most people I tell this to, you're a bit unhappy about the laziness of these students. Instead of learning something for themselves, they simply used Google to find it and (effectively) recited what they found. Pretty weak, eh?

Einstein was once asked how many feet are in a mile. He replied something like "I don't keep information in my head when I can just open a book up" (I googled that). Einstein apparently didn't have Google.

Funny thing is that when Einstein was alive, he'd look up simple facts (like the 5,280 feet in a mile) in a book. Ten years ago we had the ability to look it up on our computers. Now I can look it up on the phone that's with me at all times. What do you think is next?

Let's say that what's next is a direct mind interface to the net. Surely this isn't a new idea, and people are working on it right now.

But think a second - what happens when we have instantaneous access to the Internet without moving a muscle? If you ask me how many feet are in a mile and I answer, you won't know if I knew it or if I "looked it up". And at that point, it pretty much won't matter. If it takes more effort to memorize it (into my real memory), then it will be better and faster to just leave it on the Internet and grab it there whenever I need it.

Like all technology, this promises to have its glitches at first - but eventually, it will be pretty reliable. And what then? Well, if our minds work like our flabby bellies, then our human memory will atrophy. We'll slowly but surely lose the ability to remember things.

We tend to describe the idea of "knowing things" as wisdom. And we tend to describe the idea of "figuring things out" (like doing math or connecting disparate concepts) as intelligence. One way to distinguish the two is that you can be born intelligent, but you can't be born wise.

Tomorrow's Internet has the potential to fully replace wisdom. We won't be any less wise - in fact, we'll all be instantaneously super-wise. And equally-wise (which may be weirder than being super-wise). Even children.

If you think this is crazy - I argue it's already happening. Those kids in my girlfriend's old class already find memorizing things to be more effort than simply googling them. As soon as they get a faster interface to that information, they'll take it.

Most people who disagree with me on this don't actually disagree; they simply fear it. It does spell a fundamental change in humanity - and that's rather frightening. Surely things will change fast. At a minimum, any business that relies on hiding information will be, ya know, gone.

But it doesn't end there.

If we all gain super wisdom, then the only mental differentiation between us is intelligence. How fast can you multiply two numbers? How many times must someone explain particle physics to you before you get the relationships between the elements involved?

A computer first beat the reigning human chess world champion in 1997. We pretty much always associated chess with intelligence, but chess is actually a pretty unfair example. Humans approach chess abstractly - in some sense considering the board as a whole, processing it in parallel, and extrapolating opportunities from it. Computers work far differently. They simply examine every possible continuation of the game from the current position (with some smart algorithms to avoid examining useless moves). Chess has so many possibilities that it took a while for computers to get fast enough, and computer programmers to get clever enough, to search enough possibilities to beat a human.
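
To make that concrete, here's a minimal sketch of that brute-force approach - minimax search with alpha-beta pruning, the classic "smart algorithm to not examine useless moves". It's purely illustrative: the "game" here is just a toy tree of scored positions, not real chess.

```python
# A toy version of the brute-force game search described above: minimax
# with alpha-beta pruning. The game tree is just nested lists of leaf
# scores standing in for evaluated chess positions.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best score reachable from this node with perfect play."""
    if isinstance(node, (int, float)):   # leaf: an evaluated position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:            # prune: opponent would never allow this
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:            # prune: we would never choose this line
                break
        return best

# Two plies deep: our move, then the opponent's best reply.
tree = [[3, 5], [6, 9], [1, 2]]
print(minimax(tree, True))  # -> 6 (the last leaf never even gets examined)
```

Give it a deeper tree and a faster machine and it "plays" better - but the approach itself never changes.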

Computer "intelligence" is likely farther off than computer "wisdom". But you're fooling yourself if you think it isn't coming. The human brain is in essence, just a machine - damage it and it stops working. Give it alcohol and it gets off kilter. Computers will reach it - maybe not computers as we know them, but computational machines will. Ray Kurzweil predicts this sometime in the 2020's or so (per the book I read anyway, he might have changed his estimate - incidentally, he predicted computers would beat a chess grandmaster in 1997 - he was off by a year).

So what happens then? To us I mean.

By that time we will have farmed out our personal memory long ago. And then, we'll start farming out our thinking. We already happily do this with calculators or spreadsheets. We all know computers kick our ass when it comes to math. Who wants to do long division anymore? Let the computer do it. We've already farmed that part of our intellect out. If you told me I could get a chip put in my head that let me do all math instantly, I'd sign up for sure.

What happens when computers can do more? I mean, literally think for us. It won't happen overnight. But just like long division and multiplication today, we'll do it little by little. As computers get smarter and smarter, and as our interface to them gets faster and simpler, we'll slowly but surely give them our thinking tasks.

And just like the dumbification of our kids today - and just like our fat bellies and long-atrophied human memory - our unused thinking capacity will get lazy too.

What happens then? Seems like, in some sense, we sort of cease to be as we know us. We become conduits to some consciousness we created elsewhere. You can call this extinction, paradigm shift, or apotheosis - it probably doesn't much matter.

I'm not smart enough to know what happens in this borgian future - but I have a feeling that in 20 or 30 years, I sure will be. And so will you.

Kurzweil is a great read on ideas of the future:
The Age of Spiritual Machines

11 comments:

Anonymous said...

a fascinating thought experiment indeed. although the way i see it, the chess example is more in line with the googling of answers than with "intelligently" finding a solution.

intelligence itself comes from a wealth of experience, subtle nuances and relationships. it's pretty damn hard to counterfeit. intelligence is better associated with the ability to be creative, and it takes a lifetime of trial and error to create.

computers need to have "existence", a persistent memory of trying things and learning consequences and the ability to extrapolate on predictions (using relationships to formulate possibilities) before we start really seeing what kind of intelligence they can form.

i do believe that a great majority of people will be "dumbed down" by machine integration, but the creative ones, who can leverage these new powers yet retain their intelligence, will still prove to be the innovators and leaders as we move forward.

Matt C. Wilson said...

I'm not concerned.

IMHO, facts like feet in a mile are data. Writing a paper is a process that results in an essay. The essay is data.

The writing of the essay is code.

The bogus parts of school are the parts that give you only data - rote spelling tests, history multiple-choice, etc. The cool parts are the code, and all the one-off corner cases that come with it. I before E except after C (except... except...). Never start a land war in Asia (see: Napoleon, Nazi Germany... except then there's Genghis Khan or the Japanese invasion of China).

So to the extent that kids are finding ways to shortcut the data retrieval, I'm not concerned. What school is about (or should be about) is finding ways to help kids write their own mental algorithm for accomplishing their goals. To instruct them, by examples, in building the skillsets they will need to produce new stuff (ideas, products, behaviors) and improve society. Until we have the Matrix, you can't download critical thinking skills from the internet.

It's important that teachers be vigilant about checking for (and punishing) plagiarism. But what's more important is that the teacher impart the message as to why plagiarism is a bad shortcut.

Maybe the result is that handing in essays goes away as a measure of the student's learning. Because it doesn't measure the process. Replace it with an oral report, or a debate, or the good old Socratic method. It's good code.

Josh McDonald said...

I think what we will have is wars the likes of which you've never seen, as Religion and Fear in the West fight tooth and nail to keep us exactly as stupid and short-lived as we are now.

Jeremy Manson said...

(I usually reverse your concepts of "intelligence" and "wisdom", so I'm going to dodge those terms.)

What do you mean by "think for us"? That's unclear to me, and I expect it is unclear to everyone else on the planet, too. To be more specific:

Most AI researchers I know don't think that computers will achieve any sort of meaningful human-style level of thought. Most of the ones I know think that concept is just science fiction. The AI that has been developed is largely just based on extrapolating patterns from enormous amounts of data.

Take that to its logical extreme. What do we have? Well, we have a machine that makes inferences based on statistical likelihood. That's not the same as thought. You still have to create and run the inference engine. You feed it input, and it feeds you output.

Perhaps the algorithm is self-tuning. Perhaps it learns more when you give it more input. Great, but it is only learning more about the domain. It can tell you how to translate Japanese into English, or whether to offer a mortgage, but we are talking about a system that was designed very specifically to do exactly that.
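
To make the point concrete, here is a toy sketch (illustrative Python, not any real AI system): a least-squares fit that "learns" a pattern from data and extrapolates it - and does so only within the single domain it was handed.

```python
# Toy illustration: "learning" as statistical inference. Fit a line to
# (x, y) pairs from one domain and extrapolate. More data tunes the fit,
# but the system only ever knows about this one domain - it has no goals
# or desires of its own.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

observations = [(1, 2.1), (2, 3.9), (3, 6.2)]   # data from a single domain
a, b = fit_line(observations)
print(a * 4 + b)   # a "prediction" for x = 4: pattern extrapolation, not thought
```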

This is not anything like what humans do. It is a small part of what we do when we are learning about a single domain. When we are learning, we have an ability to set goals that match our desires and to choose domains that match those goals. We have the ability to compare our input against the goals, to determine whether we are meeting the goals, and to adjust our strategies and the domains we choose accordingly.

To do any of this, you have to start with a desire. What does it even mean for a computer to have a desire? Computers don't have wants or needs or desires. I think that, in fact, most serious AI researchers treat this as a fundamentally unserious question, and spend their lives rolling their eyes when people ask them about it.

Might it be possible to automate such tasks as writing convincing essays for school? That seems reasonably likely. Can a computer make better decisions about my life than I can? That's possible too -- for example, a computer programmed with economic data is likely to tell me to avoid predatory lenders. Can a computer think? That's barely a question, let alone a goal.

As for computers replacing information storage...

When I was a teacher, I experienced the cut-and-paste phenomenon, too. This was a few years back, and things might have changed, but it was generally the students who were doing the poorest in the class, and you could generally tell instantly that they had done it, because it wasn't written in their voice.

Now, cheaters might have become more adept by now (offering C-level papers on-line, I suppose), but cheaters were always going to cheat.

That leaves the rest of us. And for the rest of us, there is another possibility -- the so-called "Outboard Brain" hypothesis. The theory runs that immediate access to all of the information in the world will actually free us up to be able to think more creatively. When I was growing up, no one ever claimed that it would be a bad idea to hand a child an encyclopedia because it would be too easy to look up the answer. The real question is, what do you do with the information when you have it?

Paul Tyma said...

Dear Dr. Manson -

I submit that you're thinking too closely to present. And too closely to our current architecture of computing.

You in some sense quantify thought, which is fine, but it's still, at some level, a mappable (maybe not by us) process.

Time is on the side of my argument. To say computers will "never" think is a bold and short-sighted, but possibly defensible, position (as all involved could be dead before it's proven wrong).

My implication is to throw away AI as we know it. Assume technology will advance and give it liberal amounts of time.

If desire is required for thought then there is no reason to think that this won't evolve too. However, that implies we're trying to emulate human thinking. I think part of the point of the article (i.e. the end of humanity) is that we're talking about something completely different. For better or worse.

Arvind Srinivasan said...

Hi Paul,

Interesting analysis. Assuming for a second that we get to a stage where our thinking can be farmed out to 'intelligent systems' - what would we do?

It somehow brings the whole question of our existence to the forefront.

So far we have been given to understand that we are to be successful, be kind, and do something to improve the lives of our posterity.

Wouldn't we have achieved that? Even though in doing so, we would have made them 'dumber' (a funny term, considering how high a road we choose to take...)

Jeremy Manson said...

Señor Dr Tyma,

I thought you were talking about the next couple of decades (the 2020s). My theory has always been that if something has been the exclusive province of dreamers for the last 50 years, and no one has any idea how to do it, it will probably continue to be like that for at least the next 20. It's like teleportation and faster-than-light travel.

Given *infinite* time and resources, and the notion of throwing away existing technology, we could probably just develop the ability to assemble a living, thinking human from individual atoms of carbon, hydrogen and oxygen (as well as the couple of dozen other trace elements). And while we're at it, evolve into noncorporeal energy beings.

I'm obviously taking it to extremes here, but there is a serious point. The problem with projecting this far into the future is that we don't know the implications of anything we do then. When computers first came into existence, many assumed that everyone would be out of a job, because computers would do it all for us. The same thing happened throughout the industrial revolution. What people didn't realize is that mankind has the ability to adapt to transformative technologies - that they make us more productive, and provide us opportunities that previous generations couldn't see.

I still don't know what you mean by thought, so I don't know if I agree with you or not. It is also hard to discuss the moral or philosophical implications of something you can't pin down with any more certainty than the word "thinking".

Anonymous said...

Dr Manson,

> Computers don't have wants or needs or desires.

Until you program it into them. I personally don't see what difference it makes whether they evolve it or someone writes it. If your emotional system runs in a VM, that's cool. I'm only interested in how it performs, e.g. killing all humans gets points deducted.

Clayton said...

Charles Stross had a book exploring some of these ideas - Accelerando I think it was called (of course I could just google it since I've forgotten :-)
The characters start out with smart glasses through which they access an augmented world - when the lead guy gets mugged and his glasses stolen, he struggles to remember who he is or what he's supposed to be doing.
Personally I don't buy the human memory failure scenario - my iPhone remembers my appointments and phone numbers, but that just leaves capacity for other information. We used to remember our history via shamans and the elders; now we just write it down.

Mike said...

I enjoyed this, and I completely agree with your discussion of memory trends thus far. But I would suggest that more direct integration of human memory with, for instance, the internet, will be decidedly non-trivial. Razib of gnxp had this comment about 'e-memory' which I think is right-on:

http://www.gnxp.com/blog/2009/09/e-memory-quantitative-or-qualitative.php
"But facts stored outside of our brains exist a la carte, as opposed to being embedded in a network of implicit connections. To generate novel insight these connections and networks of facts need to exist latent as background conditions underneath reflective thought."

It's difficult to predict how digital information on the net can be meshed into human memory so as to place them into our 'network of implicit connections'.

Ralph said...

What happens when computers can do more? I mean, literally think for us.

It depends what kind of "thinking" you are talking about. My GPS already "thinks" for me about how to get home from somewhere I've never been before. My calendar software "thinks" about my appointments, people's birthdays, and holidays.

Offhand, I see no reason my computer should "think" about what I want to do for the weekend, or what book I want to read next, or whether to see someone again.