Archive for the 'Philosophy' Category
January 3rd, 2014 by Wil
Less than a year ago I read the Eckhart Tolle book “A New Earth” and talked about it here. Now I’m reading what is considered his main text, “The Power of Now.”
Tolle’s main point—one that hardly originates with him—is that “egoic” thinking is the source of a lot of unhappiness. Egoic thinking is “I” thinking. For example…
“I am a millionaire and so I am awesome.”
“I have a beautiful cat therefore I rule.”
“I wrote a great piano sonata therefore I am the best.”
But it’s not just affirming statements, it could be…
“I have an IQ of 45 therefore I am stupid.” (Frankly, 45 is such a low IQ I doubt the idiot would even be able to form that thought.)
“I lost my wife to a better looking man therefore I am a loser.”
You get the drift. Who thinks this way? Pretty much everyone. Tolle argues this way of thinking is so built into our culture that most people are unaware that there even are other ways to think. Certainly I am guilty of this kind of thinking, though I am trying to do less of it.
It struck me today that there's something sort of anti-progressive about Tolle's argument. (By progressive I mean politically progressive: vegans, Mother Jones, MoveOn, etc.) The progressive movement, at least its academic component, is very tied up in identity politics. "I am a gay, African American/Latino from a third generation middle class family" …that sort of thing. Borrowing from Marxism, progressivism is, well, frankly obsessed with defining people via classifications. Even though Tolle is associated with fringy, peace loving, new age types, I see a certain conflict between the two belief systems.
Frankly, plenty on the right are obsessed with individual classification too. "I am a God fearing conservative from Alabama" and what not. But you don't get the sense the right is focused on gender, race, class etc. to the degree the left is.
Oddly, this reminds me of my recent article on the 80s kung fu flick, "The Last Dragon." I argued that Leroy, the African American hero of the film, essentially redefined himself as Asian: he took on a new racial identity.
This kind of cultural switcheroo might just sound like a gag played for cheap laughs but I think it really is the “soul” of the film, arguing—just as your college sociology professor would—that race is a social construct, one we are free to dismiss when we find an identity more to our liking. Granted, the embrace of blackness by the Chinese trio seems a little phony—a desperate grab at hipsterdom—but Leroy’s comes across as real; even though he’s from Harlem, he finds a path and identity in the East.
Tolle would probably argue that Leroy should dispense with any racial identification (and as I think about it, maybe that is what he really does.) But the movie does address the impermanence of these kinds of egoic constructs.
I am awesome.
January 2nd, 2014 by Wil
As everyone knows I've spent quite a bit of time over the past four years reading about neuroscience and psychology. Occasionally I'll see some comment made about how some Buddhist monk in 2300 B.C. or a Christian philosopher in 1200 A.D. made an observation that is now supported by science. I would often think, "Wow, that's pretty impressive. Even though these guys didn't have the advantages of the modern era—M.R.I.s and peer reviewed research etc.—they were able to get to some core truths about the nature of existence."
I now wonder if I have this backwards. I’m presuming modern humans have the advantage and people in the past were disadvantaged. But, frankly, if you lived in 2300 B.C. and your day consisted of catching some fish and then staring at clouds for 6 hours, how could you not make knowledgeable observations about existence? And, if you live in our era with the endless onslaught of meaningless bullshit, how can you really have the time to simply exist?
I’m aware that not everyone in the past sat around staring at clouds all day – there were wars, pestilence, starvation etc. But some folks did, for decades perhaps. And they probably led richer lives (if you’ll allow me a value judgement) than we do now.
December 16th, 2013 by Wil
I’ve mentioned that I often find myself musing on an original thought only to find, after a month or so, that someone grabs attention by publishing the same idea, usually in some sort of “respected” journal or web site. A lesser man might be upset with the psychic theft of his ideas, but not me. I’m happy to provide my musings for the good of mankind.
Not long ago I was thinking about how we define the notion of life. For instance, we define a grasshopper as alive and a rock as not. But, the more you reduce living things to their tiny components, the more they appear similar to non-living things. All of us—living and dead—are made up of molecules which themselves are made up of atoms which can be broken down to quantum particles. If we are all made up of essentially the same stuff, why are some things alive and some dead?
You might say, “because living things move,” but of course so do remote controlled cars. And some non-living things don’t move for eons.
In a blog post entitled "Why Life Does Not Really Exist" science writer Ferris Jabr takes this ball and runs with it, doing a much better job with the topic than I could. Ultimately he arrives here:
Why is defining life so frustratingly difficult? Why have scientists and philosophers failed for centuries to find a specific physical property or set of properties that clearly separates the living from the inanimate? Because such a property does not exist. Life is a concept that we invented. On the most fundamental level, all matter that exists is an arrangement of atoms and their constituent particles. These arrangements fall onto an immense spectrum of complexity, from a single hydrogen atom to something as intricate as a brain. In trying to define life, we have drawn a line at an arbitrary level of complexity and declared that everything above that border is alive and everything below it is not. In truth, this division does not exist outside the mind. There is no threshold at which a collection of atoms suddenly becomes alive, no categorical distinction between the living and inanimate, no Frankensteinian spark. We have failed to define life because there was never anything to define in the first place.
My sentiments exactly! But Jabr then fails to explore the dark questions this raises. Modern ethics and morality are all based on the assumption that life is something… a vital force, a soul, whatever. How, then, do we reconcile our moral concepts with the view that life is not real? Why is it wrong for me to roll a steamroller over a baby (i.e. a collection of molecules) but not a log (i.e. a collection of molecules)? These sorts of questions are, I think, going to be the difficult problems of the coming centuries.
You could accuse me of being willfully ignorant here. I don’t, of course, go through life equating people with rocks and logs. But I do ask why I don’t. Is the distinction an essentially meaningless (though, from an evolutionary perspective, useful) one built into the human mind? Or is there a real qualitative difference between the living and non-living?
December 14th, 2013 by Wil
Lately I've found myself noticing a phenomenon I've probably mentioned here in the past: the way ideas seem to leap up out of the netherworlds of my mind into my conscious brain. This happens a lot while waking up. Some particular issue is bothering me, perhaps something work related or a problem with a song or piece of writing I'm working on, and the solution suddenly appears. I find I don't "build" ideas in a step by step manner, but rather that they "pop up," often fully formed.
In Jonah Lehrer's book "How We Decide" he advocated the "sleep on it" method of problem solving. Struggling with a problem head-on is often ineffective, he argued. You are better off taking a walk or doing something to distract your conscious mind. Let your subconscious work on the solution.
I’m reading “The Mind’s I” and it makes an interesting point related to all this.
Our conscious thoughts seem to come bubbling up from the subterranean caverns of our mind, images flood into our mind’s eye without our having any idea where they came from! Yet when we publish them, we expect that we—not our subconscious structures—will get credit for our thoughts. This dichotomy of the creative self into a conscious part and an unconscious part is one of the most disturbing aspects of trying to understand the mind. If—as was just asserted—our best ideas come burbling up as if from mysterious underground springs, then who really are we? Where does the creative spirit reside? Is it by an act of will that we create, or are we just automata made out of biological hardware, from birth until death fooling ourselves through idle chatter into thinking that we have “free will”? If we are fooling ourselves about these matters, then whom—or what—are we fooling?
Do we “deserve” credit for our accomplishments and ideas?
December 8th, 2013 by Wil
I've been reading a book I've been interested in for some time: "The Mind's I: Fantasies and reflections on self and soul." The subject is probably obvious from the title. Within its pages I came across a very thought-provoking line of inquiry.
First, let's consider the "conventional" view of consciousness. The idea is that we have all these brain neurons – tens of billions of them – connected to each other by their arms and legs (or more correctly dendrites and axons.) They send signals to one another and somehow, out of this mass of connecting wires, consciousness arises. This is largely the premise of the book "Connectome" which I discussed on these very pages.
Let’s apply a thought experiment. Suppose you take a person’s brain and tease apart all the neurons from each other. You put each neuron in its own nutrient bath (to keep it alive) and you fix some kind of radio transmitter on each of its inputs and outputs (e.g. arms and legs.) Each neuron can now pass signals to all its fellows as it did before, only now it’s using these transmitters. (This is technically impossible but go with me on this.) Is it reasonable to conclude that this brain is still conscious? Maybe, though something seems off.
Let’s get even crazier. Let’s say we observe that a particular experience – eating a cheese sandwich – causes the neurons to fire in a very precise order (as it almost certainly would.) Then, instead of placing radio transmitters on each neuron, we place little pulse devices that can zap each neuron just like a brain signal. At this point we should be able to activate (in this brain) the experience of eating a cheese sandwich just by zapping the neurons in exactly the same order (and same speed) they would be zapped during a “real” sandwich eating experience. But would our brain – a bunch of neurons lying in separate chemical baths, not even connected to each other but receiving zaps from pulse devices – be conscious? It seems hard to believe it would. What is binding these neurons together?
I can think of several possible conclusions from all this.
1) Dualists and spiritualists are right: there is an immaterial soul. Something that takes all that neural processing and in some way interprets it as a conscious experience. Of course, this is just taking one mystery – how does the brain work? – and replacing it with another – how does the soul work? (You could exchange the word “soul” with “mind” and make the same point.)
2) There’s some missing part to the connectome theory – some strange property that emerges out of complex systems like the brain, or dark matter, or weird laws of quantum physics, who knows what. This sort of thing is what I believe the quantum consciousness movement advocates.
3) Separate strands of unconnected brain tissue CAN be conscious. Maybe all sorts of weird complex systems can. Maybe clouds are conscious! Computers! The universe!
4) This one is hard to really put into words but I like it. We are presuming the brain has to correlate to this thing we call consciousness but we have never really defined consciousness. Maybe we aren’t really conscious at all? Maybe it’s just some kind of illusion? (But don’t we need to be conscious to be fooled by an illusion? Like I said, this one is tricky.)
October 21st, 2013 by Wil
A while back I was considering an idea for a fiction character. The conceit was that the character had multiple consciousnesses in their brain, but each consciousness generally arrived at the same decisions. So, if this person received a coffee from a waitress, one consciousness might think, "Wow, she sure brought the coffee fast, I better thank her," while another consciousness might think, "Look at this whore. I bet she thinks by bringing me coffee quickly she'll get a tip! Oh, well, I better thank her in the interests of conforming to society. Bleg." In addition, neither consciousness was aware of the other.
I'm continuing to read Ray Kurzweil's "How To Create a Mind" and he gets into some related territory, even allowing for the possibility of multiple consciousnesses. There is strong evidence for some version of this possibility. For example, we have Mike Gazzaniga's split brain patients. I described these patients in an earlier post.
These are people, usually epileptic, who’ve had the series of neural fibers that connect their left and right brain hemispheres separated (for therapeutic reasons.) Gazzaniga came to find that in subtle ways these people are really of two minds. The right hemisphere is very literal and has no language function. The left hemisphere is the interpreter (e.g. it can construct stories and explanations – often incorrectly – from observed events), and has rich language functionality.
His exact experiments have been described many times and I see no reason to repeat them here. (This Nature article covers the gist.) But what’s observed is that the two separate sections of brain really seem to process the world separately and be unaware of each other.
We can also consider half brain patients. These are people who, as you might suspect, have half a brain (either because of a birth defect or a surgical procedure.) The term is often considered derogatory, but in fact patients with half a brain function quite well. But they raise the question: what is that missing half a brain (present in most people) doing?
And of course, we can consider the unconscious – the part of your brain regulating your heart, causing your legs to walk and, possibly, repressing your sexual attraction to your dog, your primal hatred of all life etc. (Sometimes these two types of unconscious are broken up into the terms “unconscious” and “subconscious.”) Is the unconscious in some way conscious, but cordoned off from what we consider our conscious experience to be?
In a sense, I’m alleging that there’s more than one “I” in our skull. There’s our standard consciousness, which has rich language functionality and subjective experience, though maybe even that can be split into separate “Is” as the split brain experiments show. Then there’s the unconscious—probably the domain of our fear and pleasure driven reptilian brain—which has no language functionality. And maybe the unconscious can also be split into multiple components, each of them independently (in some sense) conscious.
You of course might say, "But I only feel that main, traditional consciousness—the one that gets up for work in the morning and watches late night television at night." Correct – "you" do, but I'm saying there are many "yous" in your body. It might help to think of those classic horror films where a person has an evil conjoined twin growing out of their body. That twin has no real say about what you (the main consciousness) decides to do, but he sits there, growing out of your chest and stewing in his anger*.
* Why do I presume these unconscious units are filled with hatred and anger as opposed to affable joy? It’s just the way I see the world, I guess.
Of course I'm really saying, maybe that evil twin (the un/subconscious) does have some effect on your decisions. He's the classic Freudian subconscious nag, spurring you to choose a wife who reminds you of your mother, or to fear the boss who reminds you of the uncle who molested you before you had memory.
The question is whether these additional consciousnesses are really conscious the way “we” are. I suspect no; their consciousness is more like that of a dog or a bug. And we – the top consciousness – are left hearing the cries and pleas of these tiny consciousnesses and using them to guide our path through life.
UPDATE: Another neurological condition implying some possible separate consciousness in one body: Alien Hand Syndrome! After a stroke…
The patient complained of a feeling of “strangeness” in relationship to the goal-directed movements of the left hand and insisted that “someone else” was moving the left hand, and that she was not moving it herself. Goldstein reported that, as a result of this report, “she was regarded at first as a paranoiac.” When the left hand grasped an object, she could not voluntarily release it. The somatic sensibility of the left side was reported to be impaired, especially with aspects of sensation having to do with the orienting of the limb. Some spontaneous movements were noted to occur involving the left hand, such as wiping the face or rubbing the eyes; but these were relatively infrequent. Only with significant effort was she able to perform simple movements with the left arm in response to spoken command, but these movements were performed slowly and often incompletely even if these same movements had been involuntarily performed with relative ease before while in the abnormal ‘alien’ control mode.
October 9th, 2013 by Wil
An interesting NY Times article argues that canine neurological function is – at least in some ways – similar to our own.
Although we are just beginning to answer basic questions about the canine brain, we cannot ignore the striking similarity between dogs and humans in both the structure and function of a key brain region: the caudate nucleus.
Specific parts of the caudate stand out for their consistent activation to many things that humans enjoy. Caudate activation is so consistent that under the right circumstances, it can predict our preferences for food, music and even beauty.
In dogs, we found that activity in the caudate increased in response to hand signals indicating food. The caudate also activated to the smells of familiar humans. And in preliminary tests, it activated to the return of an owner who had momentarily stepped out of view. Do these findings prove that dogs love us? Not quite. But many of the same things that activate the human caudate, which are associated with positive emotions, also activate the dog caudate. Neuroscientists call this a functional homology, and it may be an indication of canine emotions.
It is a bit of a stretch to conclude that, because dog brain components activate in a way similar to ours, dogs must experience life in the same manner we do. But it is a step toward that conclusion. And if science does determine that dogs (and likely other animals of similar sentience) feel emotions as humans do, then mankind is going to have to breathe in a collective gasp at how we've often treated dogs throughout history.
The piece reminds me of an article I once wrote on the topic of morality. It was entitled, “You Think You’re a Good Person? You’re Not!” At one point I said:
By studying the past, and gaining a sense of the evolution of morality, perhaps we can intuit where it is headed. I’ve long felt that there will be a wide expansion of animal-rights in the coming centuries. As animals are revealed to be more and more intelligent and emotive, and as the possibility of “growing meat” becomes reality, there will be increased pressure on the meat industry to soften its ways, or even dissolve completely. (The Spanish government is even currently debating vastly increased legal protections for gorillas.) And some scientists are already arguing that plants have an emotional life, so plant rights may not be far behind. Of course many a science fiction author has painted futuristic scenarios where pieces of technology — computers and robots — demand protection under the law. And in this future era, they will look back at citizens of our age — meat eating, gardening, robot abusing bastards — and be shocked at our cruelty much the same way we are appalled at the behavior of slave owning aristocrats of the 1800s.
September 18th, 2013 by Wil
For years I’ve suffered from a problem my Dad has mentioned struggling with. I wake up in the morning and in that half asleep state worry about all my problems. This doesn’t happen every morning, but occasionally. Lately, when I find myself doing this I try to refocus my brain on something positive or at least neutral. I often think of animals because I like them. For instance this morning I found myself visualizing an owl. He had that strange expression owls often wear and I could see him flying around in a forest.
It struck me that maybe this process is similar to the whole Indian Totem animal thing. My understanding is that Indians would meditate (perhaps after consuming peyote or something) on a particular animal and, in some sense, become that animal. I don’t think anything mystical is going on there but I could see the whole process providing a focused sense of energy.
Cats are also good subjects to focus on.
August 29th, 2013 by Wil
Educated readers doubtless recall my old Acid Logic article “What is Morality?” in which I argued that our sense of morality is less a thought out, reasoned set of rules and more an ethereal sense that is actually physically felt in our body. We avoid doing bad not because we are intellectually opposed to it, but because contemplating bad acts makes us feel uncomfortable.
As mentioned, I’ve been reading Mike Gazzaniga’s book on free will, “Who’s In Charge?”, and he discusses some observations relevant to the morality issue. Gazzaniga is most famous for studying “split brain patients.” These are people, usually epileptic, who’ve had the series of neural fibers that connect their left and right brain hemispheres separated (for therapeutic reasons.) Gazzaniga came to find that in subtle ways these people are really of two minds. The right hemisphere is very literal and has no language function. The left hemisphere is the interpreter (e.g. it can construct stories and explanations – often incorrectly – from observed events), and has rich language functionality.
In the book, Gazzaniga notes the work of another neuroscientist who discovered that when we use our knowledge of other people's beliefs and intent, we use a particular brain area in the right hemisphere. Gazzaniga was surprised by this because he presumed this would mean that the left brain in split brain patients (the talky brain) would be incapable of keeping track of people's intentions. He designed a series of experiments to suss this out. Basically this involved asking patients questions like, "If Susie gives what she thinks is sugar but actually is poison to her boss, is she bad?" or the inverse, "If Susie gives her boss sugar that she thinks is poison, is she ok?" These questions, as you can see, are all about Susie's intent. And, as Gazzaniga predicted, the split brain patients (or at least their talking left side) focused on the outcome of the actions, not the intent. It didn't matter that Susie was trying to kill her boss if it all worked out okay.
It would seem that morality is a series of brain functions. If a piece is missing (or inaccessible), our moral function gets warped, at least by the standards of society.
May 29th, 2013 by Wil
The June 2013 Discover Magazine has an interesting article about advances being made in computer intelligence. The article is, unfortunately, not available online, but this quote caught my eye.
Cognitive computers… will weave together inputs from multiple sensory streams, form associations, encode memories, recognize patterns, make predictions and then interpret, perhaps even act – all using far less power than today’s machine.
Now is as good a time as any to ask a question that has floated around in science-fiction philosophy circles for a long time: will computers ever think like us? Will computers, at least in their thought processes, become human?
Even in their current state, computers are quite “intelligent.” You might’ve heard of Deep Blue, the computer program that bested chess great Garry Kasparov. And don’t forget Watson, the computer that won on Jeopardy! Computers are engaging in processes that at least mimic information recall and strategizing.
The philosopher John Searle has come up with an interesting thought experiment to illustrate why computers can never really think as humans do. He proposes that you have a man locked in a Chinese prison cell. The man does not speak or read Chinese. Chinese characters are passed into his cell, and he draws from his own collection of Chinese characters to “answer.” He eventually gets pretty good at responding with the correct Chinese characters. (Theoretically this would take many lifetimes to learn but this is a thought experiment.) The guy is presumably thinking along the lines of, “whenever I get this character or character set, they seem to like it when I reply with this character or character set.” To the Chinese people on the outside, it seems like the guy in the prison cell understands the conversation but in reality he doesn’t. The prisoner recognizes the designs of the symbols, but not their meaning.
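The prisoner's rule-following can be caricatured in a few lines of code: a pure lookup table mapping input symbols to output symbols, with no representation of meaning anywhere in the program. (The character pairings below are arbitrary placeholders invented for illustration, not Searle's own examples.)

```python
# A toy "Chinese room": replies come from a memorized lookup table,
# so the responder never represents what any symbol means.
rule_book = {
    "你好吗": "我很好",          # a learned input/output pairing
    "今天天气如何": "天气很好",  # another memorized pairing
}

def chinese_room(message: str) -> str:
    # Pure pattern matching: return the memorized reply, or a
    # stock "please say that again" when nothing matches.
    return rule_book.get(message, "请再说一遍")

print(chinese_room("你好吗"))  # looks fluent from outside the room
```

From the outside, the function "answers" questions; on the inside there is nothing but string matching, which is exactly Searle's point.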
Searle’s allegation is that this is how computers play chess, compete on Jeopardy!, and generally “think.” They can trade in symbols, but they can’t understand the meaning behind the symbols.
Now, you can ruminate on Searle’s thought experiment and say, “Yep, looks like he’s got it. There’s something fundamentally different about the human thinking experience.” Or you can wonder, “do we humans really understand the meaning behind symbols in which we trade?”
Our intuitive response to that latter question is, “of course we understand meaning!” But, let’s “think” about this for a bit…
Here's a basic thought: 2 + 2 = 4. Hard to deny that one, and it's a thought we've all had. But when someone asks "what is 2 + 2?" do you contemplate the logic of the question, or do you just spit out the same answer you've always spat out? Do you think about the meaning of the question, or do you reflexively output an answer? Frankly, if you really examine your thought process, it's difficult to say what happens, but generally speaking I have the sense of a kind of reflexive output. I'm certainly aware that the way I learned my addition, subtraction, multiplication and division tables was more rote memorization than contemplating the logic of each equation.
When we get into more complex math problems, a more complex kind of thinking emerges. Let's take, "what is 12 x 11?" My rote memory fails me. But I do recall that 11 is equal to 10 + 1. And rote memory does tell me that 12 x 10 = 120. So if I take that output, and add 12 (otherwise known as 12 x 1), I get 132. But am I really thinking the process through, or am I merely collecting outputs from memorized calculations (e.g. 12 x 10 = 120 and 120 + 12 = 132)? Am I simply manipulating symbols? If I'm simply manipulating symbols, then even if I solve problems of great complexity, I'm still not "thinking" about them, because any massive problem can be broken down into a multitude of more basic problems.
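That chain of recalls can be written out literally: the "calculation" is nothing but two table lookups and one addition, symbol manipulation with no understanding required. (The tiny rote table below is my own stand-in for memorized times-table facts, purely for illustration.)

```python
# 12 x 11 via symbol shuffling: split 11 into 10 + 1, look up the
# memorized products, then add the retrieved outputs together.
rote = {(12, 10): 120, (12, 1): 12}  # "times table" entries, recalled rather than computed

def twelve_times_eleven() -> int:
    # No multiplication happens here at all; just recall and recombine.
    return rote[(12, 10)] + rote[(12, 1)]

print(twelve_times_eleven())  # 132
```

Nothing in that function "knows" what multiplication is; it only retrieves and combines stored outputs, which is the worry being raised here.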
There's one way we definitely don't think like computers. My understanding is that the way Deep Blue played chess was something like the following: it would look at the current chess board and "think" along the lines of, "if I move this piece here, he'll move that piece there, and then I'll have to move this piece there, etc." From these various generated scenarios, Deep Blue would choose the best one. Deep Blue could run through tons of these scenarios (thousands? millions?) very quickly, much faster than humans could. So how do humans handle complex games of chess (and other life challenges)? Our limited memory prevents us from searching through millions of possible scenarios. To a large degree, we use heuristics, simple rules that can be used to handle computations. A basic math heuristic might be that any number times 10 just gets a zero added to the right (e.g. 12 x 10 = 120.) A basic chess heuristic might be, "never expose your king."
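The contrast can be made concrete on a game far simpler than chess. In the misère version of Nim (players alternately take 1–3 stones from a pile; whoever takes the last stone loses), a Deep Blue-style player exhaustively recurses through every continuation, while a "human" player applies one memorized rule. Both the game choice and the rule here are my own illustration, not anything from the Discover article.

```python
# Brute-force lookahead vs. a one-line heuristic on misère Nim
# (take 1-3 stones per turn; taking the last stone loses).

def wins(pile: int) -> bool:
    """True if the player to move can force a win from `pile` stones."""
    if pile == 1:
        return False  # forced to take the last stone and lose
    # Try every legal move that leaves the opponent at least one stone.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile - 1)

def search_move(pile: int) -> int:
    """Deep Blue style: enumerate every move, recurse on every reply."""
    for take in (1, 2, 3):
        if take <= pile - 1 and not wins(pile - take):
            return take  # leaves the opponent in a losing position
    return 1  # every move loses; take the minimum

def heuristic_move(pile: int) -> int:
    """Human style: one memorized rule - leave the pile at 1 mod 4."""
    return (pile - 1) % 4 or 1

# Both strategies agree on the move, but one does vastly more work.
print(search_move(18), heuristic_move(18))
```

The exhaustive searcher and the heuristic pick the same moves; the difference is that the rule runs in constant time, which is roughly the trade-off described above.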
Jonah Lehrer’s book* “How We Decide,” describes a number of scenarios where unconscious heuristics were used in complex, often dangerous situations. I can’t recall all the details, but I remember one about a Navy officer in the first Iraq war who was able to correctly determine that a radar blip was an enemy missile, not a friendly airplane. The catch was that he couldn’t explain how he knew the right answer, he just did. The military studied the situation and eventually determined that the two types of airborne objects appeared on radar in slightly different ways, ways largely imperceptible to the conscious mind. This guy was simply operating on a gut feeling, which might as well be another word for heuristic.
* In fairness, I should note that many of Lehrer’s books including “How We Decide” have been found to contain fraudulent elements. But since I’m using the story as an example of the phenomenon of gut feelings (which have been recognized for centuries (er, I think)) I’m letting it stand.
However, these heuristics, like rote memorization, aren't really thoughts; they're more like reactions. We don't process them, or at least we're not consciously aware of processing them; we just output them.
Frankly, as I “think” it through I’m not sure what a “real” thought would even be. Obviously we don’t want to think through and do calculations for every problem that comes up. It’s much easier to utilize these automated processes of memorization and heuristics. Maybe the real question is not, “can computers think like humans?” but, “are computers conscious of their thoughts?”
Which opens up the question, “what is consciousness?”
Sigh. Life is hard.