Do we “know” anything?

The June 2013 issue of Discover Magazine has an interesting article about advances being made in computer intelligence. The article is, unfortunately, not available online, but this quote caught my eye.

Cognitive computers… will weave together inputs from multiple sensory streams, form associations, encode memories, recognize patterns, make predictions and then interpret, perhaps even act – all using far less power than today’s machines.

Now is as good a time as any to ask a question that has floated around in science-fiction and philosophy circles for a long time: will computers ever think like us? Will computers, at least in their thought processes, become human?

Even in their current state, computers are quite “intelligent.” You might’ve heard of Deep Blue, the IBM computer that bested chess great Garry Kasparov. And don’t forget Watson, the computer that won on Jeopardy! Computers are engaging in processes that at least mimic information recall and strategizing.

The philosopher John Searle came up with an interesting thought experiment to illustrate why computers can never really think as humans do. He proposes that you have a man locked in a Chinese prison cell. The man does not speak or read Chinese. Chinese characters are passed into his cell, and he draws from his own collection of Chinese characters to “answer.” He eventually gets pretty good at responding with the correct Chinese characters. (Theoretically this would take many lifetimes to learn, but this is a thought experiment.) The guy is presumably thinking along the lines of, “whenever I get this character or character set, they seem to like it when I reply with this character or character set.” To the Chinese speakers on the outside, it seems like the guy in the prison cell understands the conversation, but in reality he doesn’t. The prisoner recognizes the designs of the symbols, but not their meaning.
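To make the point concrete, here’s a toy version of the prisoner’s rulebook in Python. This is my own illustration, not anything from Searle, and the phrases in the rulebook are arbitrary made-up examples; the point is that the program maps symbols to symbols without anything in it representing what the symbols mean.

```python
# A toy "Chinese room": reply by looking up memorized symbol patterns.
# Nothing in this program represents what any of the symbols mean.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很好。",  # "Nice weather today." -> "Yes, very nice."
}

def respond(message: str) -> str:
    """Return the memorized reply for a known input, or a stock fallback."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(respond("你好吗？"))  # a fluent-looking reply, with zero understanding
```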

Searle’s contention is that this is how computers play chess, compete on Jeopardy!, and generally “think.” They can trade in symbols, but they can’t understand the meaning behind the symbols.

Now, you can ruminate on Searle’s thought experiment and say, “Yep, looks like he’s got it. There’s something fundamentally different about the human thinking experience.” Or you can wonder, “do we humans really understand the meaning behind the symbols we trade in?”

Our intuitive response to that latter question is, “of course we understand meaning!” But let’s “think” about this for a bit…

Here’s a basic thought: 2 + 2 = 4. Hard to deny that one, and it’s a thought we’ve all had. But when someone asks “what is 2 + 2?” do you contemplate the logic of the question, or do you just spit out the same answer you’ve always spat out? Do you think about the meaning of the question, or do you reflexively output an answer? Frankly, if you really examine your thought process, it’s difficult to say what happens, but generally speaking I have the sense of a kind of reflexive output. I’m certainly aware that the way I learned my addition, subtraction, multiplication and division tables was more rote memorization than contemplating the logic of each equation.

When we get into more complex math problems, a more complex kind of thinking emerges. Let’s take, “what is 12 x 11?” My rote memory fails me. But I do recall that 11 is equal to 10 + 1. And rote memory does tell me that 12 x 10 = 120. So if I take that output and add 12 (otherwise known as 12 x 1), I get 132. But am I really thinking the process through, or am I merely collecting outputs from memorized calculations (e.g. 12 x 10 = 120 and 120 + 12 = 132)? Am I simply manipulating symbols? If I’m simply manipulating symbols, then even if I solve problems of great complexity, I’m still not “thinking” about them, because any massive problem can be broken down into a multitude of more basic problems.
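In code, that process might look something like the following sketch (a toy illustration of my own, obviously, not a claim about how brains are actually implemented): a memorized times table plus one decomposition rule, with no step anywhere that “understands” multiplication.

```python
# Rote memory: the times table drilled in school, stored as a lookup table.
TIMES_TABLE = {(a, b): a * b for a in range(13) for b in range(11)}  # through 12 x 10

def multiply(a: int, b: int) -> int:
    """Answer by lookup if memorized; otherwise decompose and try again.
    (A toy: it only terminates for a <= 12, the range of the memorized table.)"""
    if (a, b) in TIMES_TABLE:
        return TIMES_TABLE[(a, b)]  # reflexive output, no "thinking"
    # Decomposition rule: a x b = a x (b - 1) + a, e.g. 12 x 11 = 12 x 10 + 12.
    return multiply(a, b - 1) + a

print(multiply(12, 11))  # 132, assembled entirely from memorized outputs
```

The answer comes out right, but every step is either a memorized lookup or a mechanical rewrite rule, which is exactly the kind of symbol shuffling Searle’s prisoner does.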

There’s one way we definitely don’t think like computers. My understanding is that the way Deep Blue played chess was something like the following: it would look at the current chess board and “think” along the lines of, “if I move this piece here, he’ll move that piece there, and then I’ll have to move this piece there, etc.” From these various generated scenarios, Deep Blue would choose the best one. Deep Blue could run through enormous numbers of these scenarios (reportedly hundreds of millions of positions per second), far more than any human could. So how do humans handle complex games of chess (and other life challenges)? Our limited memory prevents us from searching through millions of possible scenarios. To a large degree, we use heuristics: simple rules of thumb that stand in for exhaustive computation. A basic math heuristic might be that multiplying any number by 10 just adds a zero to the right (e.g. 12 x 10 = 120). A basic chess heuristic might be, “never expose your king.”
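For the curious, here’s the skeleton of that kind of look-ahead search in Python. It’s a generic minimax sketch applied to a trivial stone-taking game, not Deep Blue’s actual code (which ran on custom chess hardware), but the “if I move here, he’ll move there” structure is the same.

```python
# Look-ahead search in miniature. The toy game: players alternately take
# 1 or 2 stones from a pile; whoever takes the last stone wins. The search
# plays out every scenario ("if I take this, he'll take that...") and picks
# the move with the best guaranteed outcome, like a chess tree search.

def best_score(pile: int, my_turn: bool) -> int:
    """Return +1 if 'I' can force a win from this position, -1 otherwise."""
    if pile == 0:
        return -1 if my_turn else 1  # whoever just moved took the last stone
    scores = [best_score(pile - take, not my_turn)
              for take in (1, 2) if take <= pile]
    return max(scores) if my_turn else min(scores)  # I maximize, he minimizes

def best_move(pile: int) -> int:
    """Choose the take (1 or 2) that leads to the best guaranteed score."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: best_score(pile - take, my_turn=False))

print(best_move(7))  # 1: taking one stone leaves the opponent a losing pile of 6
```

The difference between this and Deep Blue is mostly scale: chess has far too many scenarios to play out to the end, so the search gets cut off at some depth and the resulting positions get scored by heuristics, which is where the two styles of thinking meet.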

Jonah Lehrer’s book* “How We Decide” describes a number of scenarios where unconscious heuristics were used in complex, often dangerous situations. I can’t recall all the details, but I remember one about a Navy officer in the first Gulf War who was able to correctly determine that a radar blip was an enemy missile, not a friendly airplane. The catch was that he couldn’t explain how he knew the right answer; he just did. The military studied the situation and eventually determined that the two types of airborne objects appeared on radar in slightly different ways, ways largely imperceptible to the conscious mind. This guy was simply operating on a gut feeling, which might as well be another word for heuristic.

* In fairness, I should note that many of Lehrer’s books, including “How We Decide,” have been found to contain fraudulent elements. But since I’m using the story as an example of the phenomenon of gut feelings (which have been recognized for centuries (er, I think)), I’m letting it stand.

However, these heuristics, like rote memorization, aren’t really thoughts; they’re more like reactions. We don’t process them, or at least we’re not consciously aware of processing them; we just output them.

Frankly, as I “think” it through, I’m not sure what a “real” thought would even be. Obviously we don’t want to think through and do calculations for every problem that comes up. It’s much easier to rely on these automated processes of memorization and heuristics. Maybe the real question is not, “can computers think like humans?” but, “are computers conscious of their thoughts?”

Which opens up the question, “what is consciousness?”

Sigh. Life is hard.
