Archive for the 'Philosophy' Category

The golden rule blows!

I’ve mentioned in the past that while I agree with atheism I find the notion that you can have morality without religion to be, well, less obvious than many make it out to be. (I tackled this idea in detail here.)

A lot of secular humanists point to the golden rule as an easy source for morality. That rule is, of course, “do unto others as you would have them do unto you.” Over at Andrew Sullivan’s blog a reader makes the case.

…we also have deeply ethical atheists, agnostics, and secularists who debate the fine points of moral behavior with as much rigor and passion as theologists do, and who are building great ethical revolutions such as environmentalism on the surprisingly robust foundation of a practical, secular ethics.

Much of this success rests on the self-explanatory Golden Rule. No fear of damnation is needed to explain why it’s a good idea to treat others as you would like to be treated. It’s a contract, and you get security and stability only if you obey it. The obviousness of this contract also makes it a firm basis for moral innovation.

The problem is that while the golden rule might work some of the time, it really doesn’t work all of the time. The idea is that if I don’t want to be screwed, I shouldn’t screw others. But really, I just don’t want others to know I’ve screwed them. If I can screw over other people without them knowing it, then I get all the benefits of the golden rule, plus a little extra for myself. Also, the premise of the golden rule is that my security and safety will be harmed if I violate it. But what if I am strong enough that I cannot be harmed? Say I’m a king, or some kind of mafia boss. Then I can break the golden rule with at least some impunity and not fear for my security. As an incentive for morality, the golden rule does not work consistently and seems to have many caveats. Counter to the writer above, there are cases where one can get security and stability without obeying the golden rule.

There’s a third complaint I’d make, which is that the golden rule isn’t really moral in any kind of purist sense. According to the golden rule, you should treat others well not because you really want to but because you want to be treated well. It’s selfish. This may be acceptable, but I think the realization takes a bit of the wind out of the sails of people like the writer above who righteously tout the golden rule as something almost holy.

Is religion the way to morality then? As I’ve said in the past, even it is flawed. The Christian argument is that one should be good to avoid burning in hell. Again, this is really a selfish argument: do this to avoid pain (and lots of it!)

I do suspect morality evolved as a social practice that tended to work for most of those who engaged in it. Those who followed the golden rule flourished and were successful at passing on their genes, etc. I presume it is, in some hard-to-imagine way, encoded into our genes. But morality and the golden rule are not really “logical” in any sense.

Our obsession with accomplishment

I continue to read Alan Harrington’s “The Immortalist.” One of the book’s arguments is that man, faced with the modern observation that god is dead, tries to achieve immortality by becoming famous, thus ensuring that he (man, not god) will not be forgotten. We don’t do this consciously, of course; this drive for celebrity and status is buried somewhere in the nether regions of the subconscious. This leads to a certain kind of craziness, as Harrington notes in one paragraph:

Middle-class people in particular have always competed for the gods’ notice, but today, with religious authority on the wane, this competition has become frantic, in some arenas unbearably so. We have a merciless obsession with accomplishment. Millions are caught up in the neurotic new faith that a human being must succeed or die. For such individuals it is not enough to enjoy life, or simply do a good job or be a good person. No, the main project, pushing all other concerns into the background, is to make a name that the gods will recognize.

I have to say this summarizes my internal battles explicitly. On one hand I derive pleasure from obtaining skills—musicianship, writing, drawing, speaking foreign languages, being a skilled lover, etc.—but on the other I realize the fruitlessness of it all. These skills have little value in the job marketplace; they are only good for generating a certain kind of respect. But why earn respect? I suppose Harrington would argue because on some level I feel it will lead to some form of immortality. But if that is a false belief, as it almost certainly is, shouldn’t I just chill out and enjoy life?

He has an interesting phrase in there: “succeed or die.” It sounds very Darwinian. I wonder if this human obsession with skills and accomplishment became stronger after Darwin put forth his “survival of the fittest” theory?

Do we force the real world into being a “just world”?

First of all, I’m back in the saddle again so to speak. Was out of town for several weeks and neglected blogging.

While away I read most of a book I’ve been meaning to tackle: “Brainwashed – The Seductive Appeal of Mindless Neuroscience.” It’s a book coming from the “neuroskeptic” school—a viewpoint arguing that many of the claims neuroscience makes are inflated. It’s hard to argue with that basic point; you do see seemingly unlikely predictions coming out of neuroscience (and science) all the time. But that said, I find the book rather mushy. I just read through the chapter on free will and found it hard to follow the arguments. Sam Harris’s eBook called “Free Will” seems more cogently argued. (He argues against the existence of free will, the opposite view of “Brainwashed.”)

The free will chapter did have an interesting anecdote about Melvin Lerner, a social psychologist who developed the idea that people like to believe in a “just world hypothesis” (i.e. that the good are rewarded and the bad punished). It seems a harmless enough delusion, but what if we alter our perception of the world to map it to a just world? And in doing so, what if we presume people who suffer deserve to suffer? The book states…

In one of his seminal experiments, Lerner asked subjects to observe a ten-minute video of a fellow student as she underwent a learning experiment involving memory. The student was strapped into an apparatus sprouting electrode leads and allegedly received a painful shock whenever she answered a question incorrectly (she was not receiving real shocks, of course, but believably feigned distress as if she were). Next, the researchers split the observers into groups. One was to vote on whether to remove the victim from the apparatus and reward her with money for correct answers. All but one voted to rescue her. The experimenters told another group of observers that the victim would continue to receive painful shocks; there was no option for compensation. When asked to evaluate the victim at this point, subjects in the victim-compensated condition rated her more favorably (e.g. more “attractive,” more “admirable”) than did subjects in the victim-uncompensated condition, in which the victim’s suffering was greater.

I think it’s possible to extrapolate too much from these kinds of experiments, but this does kind of jibe with my sense of the world. We see someone suffering for whom we can do nothing and as a result we lower our opinion of them, basically saying, “sucks to be you!”

Ha! Humans are scum!

The death of individuality

I’m working on my next article for acid logic and it’s essentially a list of modern-day fears that I think could be exploited by horror movie creators. One fear is fairly esoteric: a fear of the loss of identity brought about by the hyper-connectedness of the age. In essence, we are so hooked in to each other that when a subject comes up we immediately know what everyone else thinks about it and tailor our opinions and ideas to match the group we want to associate with. (Political tribes are an obvious example of these groups.)

The fear is not so much about this process but the crisis of self it could bring about. If you wake up one day and find that your opinions totally match some subset of the masses, would you start to wonder whether you really exist on a meaningful level? Would you conceive of yourself as merely a vessel for popular opinion?

Tackling the new atheists

In one of my recent articles on heretical ideas I noted that a) I’m an atheist, and b) I don’t think there is any way to divine morality. This puts me at odds with most of the “New Atheists” like Sam Harris and Richard Dawkins who argue we can have morality without God.

There’s an interesting article in the Spectator by a writer named Theo Hobson. He is presumably religious and takes a rather snippy tone toward atheists. But I think he makes some points that complement my own, and his piece is worth reading if you need something philosophical to curl up with.

In these paragraphs he points out something I’ve thought about. New atheists dismiss faith, but their insistence that some kind of moral rule can be ascertained sounds an awful lot like faith itself.

The trouble is that too many atheists simply assume the truth of secular humanism, that it is the axiomatic ideology: just there, our natural condition, once religious error is removed. They think morality just comes naturally. It bubbles up, it’s instinctive, not taught as part of a cultural tradition. In The God Delusion Richard Dawkins tries to strengthen this claim using his biological expertise, arguing that humans have evolved to be altruistic because it ultimately helps their genes to survive. But in the end, he admits that no firm case can be made concerning the evolutionary basis of morality. He’s just gesturing with his expertise, rather than really applying it to the issue at hand.

Here’s his muddle. On one hand he believes that morality, being natural, is a constant thing, stable throughout history. On the other hand, he believes in moral progress. To square the circle he plunges out of his depth, explaining that different ages have different ideas of morality, and that in recent times there has happily been a major advance in our moral conventions: above all, the principle of equality has triumphed. Such changes ‘certainly have not come from religion’, he snaps. He instead points to better education about our ‘common humanity with members of other races and with the other sex — both deeply unbiblical ideas that come from biological science, especially evolution’. But biological science, especially evolution, can be used to authorise eugenics and racism. The real issue is the triumph of an ideology of equality, of humanism. Instead of asking what this tradition is, and where it comes from, he treats it as axiomatic. This is just the natural human morality, he wants us to think, and in our times we are fortunate to see a particularly full expression of it.

It’s interesting that he argues that new atheists feel moral truth is “instinctive.” I tackled this very premise in my article.

Another New Atheist, Sam Harris, hints at something similar in this Big Think video when he says (after arguing that we don’t need God for morality) that we have “some very serviceable intuitions about what good and evil are.” The problem, however, is that feelings and intuitions (programmed into us via evolution or not) are not a logical means by which we can define moral behavior. Most of us would agree that proposing the murder of a 10-week-old baby feels wrong, but that doesn’t mean it can be logically shown to be so. We can even construct scenarios where killing the baby is the right thing to do for the greater good (say, the baby is the carrier of a deadly disease that cannot be allowed to spread). In such cases, killing the baby might be the right thing to do (according to conventional ethics), but I think we all know that it would still feel awful to carry out the act. From that we must conclude that feelings/intuitions are not a trustworthy source for divining morals.

Breatharians

Today in my readings I came across mention of something I’d never heard of: breatharians. These are people who believe they can live without food by subsisting on air and sunlight. It sounds insane, of course, but a Google search reveals plenty of conversation about the topic. How do they do it? Well, for the most part they don’t.

In 1983, most of the leadership of the movement in California resigned when Wiley Brooks, a notable proponent of breatharianism, was caught sneaking into a hotel and ordering a chicken pie.

Mmmm... chicken pie.

Also note:

Under controlled conditions where people are actually watched to see if they can do it without sneaking some real food, they fail. The name most commonly associated with breatharianism is Australia's Jasmuheen (born Ellen Greve), who claims to live only on a cup of tea and a biscuit every few days. However, the only supervised test of this claim (by the Australian edition of 60 Minutes in 1999) left her suffering from severe dehydration[4] and the trial was halted after four days, despite Greve’s insistence she was happy to continue. She claimed the failure was due to being near a city road, leading to her being forced to breathe “bad air”. She continued this excuse even after she was moved to the middle of nowhere.

The various forms of human insanity seem to have no limits.

Atheism versus spirituality?

I’ve been reading an interesting book on twin studies called “Identically Different.” It gets a reader up to date on the current analysis of what kinds of human behavior can be attributed to genes. The book is broken up into chapters such as “The Happiness Gene,” “The Talent Gene,” etc. (I should make clear the author is far from an absolutist who believes genes are the dominant force in our lives; he subscribes to the mainstream belief that our behavior is a combination of nature and nurture.)

One chapter is “The God Gene.” It explores the idea that some part of our brain is wired to believe in God or at least something “greater”. The author is not the first to make this argument. (I’ve commented on similar material here.)

Part of how scientists study this sort of thing is by asking people to fill out self-surveys on their religiosity. And here I have a small beef with the process. The author describes two questions on a multi-question survey.

I believe that all life depends on some spiritual order or power that cannot be completely explained — true or false.

Often, when I look at an ordinary thing, something wonderful happens — I get the feeling I am seeing it fresh for the first time — true or false.

If I’m interpreting this correctly, answering false to these questions would be a marker for atheism, and marking true would imply spirituality.

I would answer true for the first and true for the second. (I can’t really claim to be blown away in these moments of personal beauty, but, yeah, sometimes I am struck by the beauty of things.) But I don’t really see this as contradictory. Everything I’ve seen about the universe seems to imply a lack of God (in the conventional religious sense.) But I don’t think that means I can’t be spiritual insofar as I enjoy the grandeur of the universe. And I am willing to concede that there could be a certain kind of greater consciousness in the universe (as I described here.)

The questions set up a battle—atheism versus spirituality—that I don’t think is necessary. I will say, however, that while I come out in favor of spirituality here, that doesn’t mean I buy into the vast wastelands of idiocy that are often touted as spirituality—channeling aliens and all that rot.

The end of the creative class?

I’ve mentioned the book “The Age of Insight,” which I found quite interesting. It was about many things, including the exploits of various artists who lived in Vienna around the turn of the century (Gustav Klimt, Egon Schiele, etc.) One thing I recall from the book is how these men really struggled to come up with unique art. They experimented with different ideas and incorporated a lot of the discoveries of that era’s science into their art. (Klimt incorporated images of blastocyst cells in his paintings, for example.) Their art was more than just pretty pictures—it had meat and substance*.

* Visual artists of the day were spurred on by two pressing challenges—1) the advent of photography, which rendered realistic painting somewhat moot, and 2) the rise of Freud and the idea that one’s “inner world” might be a more fascinating place than the outer world.

I think this trend lasted through the 20th century. Think of jazz musicians, existential filmmakers, Robert Crumb, psychedelic music etc. Whatever you think of this stuff (I, personally, find most psychedelic music laughable though I applaud the band Ultimate Spinach) it was art with a lot of thought behind it.

This was art made by what I would call the creative class. These were artists (of all disciplines) exploring the world, making art not for any obvious immediate use (like being used in a greeting card or as a portrait.)

I’m not sure you see much of this today. With the decimation of the value of content in the digital age, I’m not sure it’s viable to make art that doesn’t have immediate use.

There was a book that came out several years ago called “The Rise of the Creative Class.” I never read it, but it would seem to dispute what I’m saying. But examine this blurb about the book (from the book’s Amazon page):

He defines this class as those whose economic function is to create new ideas, new technology, and new creative content. In general this group shares common characteristics, such as creativity, individuality, diversity, and merit. The author estimates that this group has 38 million members, constitutes more than 30 percent of the U.S. workforce, and profoundly influences work and lifestyle issues. The purpose of this book is to examine how and why we value creativity more highly than ever and cultivate it more intensely.

This creative class is creating for a (usually commercial) reason. I absolutely agree that creating a web application can be a very creative pursuit—after all, I’ve been part of that process—but it’s different from explorational creating, from the act of creating to “find your voice.” We value creativity today if it has short term payoff, not so much if the benefits aren’t immediately obvious. Art now has to immediately find its value in the marketplace.

I should be clear: I realize there’s some crossover here. Artists making “for immediate use” art often spend their off hours making more esoteric art. But I find the general trend troubling.

Electromagnetic Consciousness

In the realm of brain studies there’s a fairly reductionist view that argues that our consciousness and subjective experience are firmly rooted in our physical brains. The idea goes that we have these incredibly complex interactions between tens of billions of neurons, and out of that arises our experience of being alive. Most authors I’ve read on the topic freely concede that the exact nature of how consciousness arises from this is a mystery, but it seems pretty clear that our self corresponds to our neural tissue. Simply consider that someone can have a stroke and become a different person — they can no longer speak or form memories or control their anger. The soul seems to exist in physical form (or, more accurately, it doesn’t exist at all.)

I’m pretty sympathetic to this view. But the book “The Mind’s I” has a thought experiment that does challenge this view. First let’s consider a brain in its ideal form. It’s sitting there, neurons firing, creating thoughts. Now let’s imagine an incredible surgery where you go in and separate every single neuron, placing each one in its own chemical bath to keep it alive. (This is, of course, impossible.) You then attach electronic signaling/receiving devices so each neuron can communicate with whatever neurons it was “attached” to (i.e. shared a synapse with) before. So, basically, even though the neurons are now separate, their signaling is exactly the same as it was in the whole brain. Can we still envision a mind arising out of all this?

Well, I dunno… maybe…

But it gets worse. Instead of putting little signaling/receiving devices on each neuron, attach little zappers that simply fire different amounts of electricity. Now separate these neurons by hundreds of miles. Then fire off each of the zappers so that the neurons fire the exact way they would if the brain’s owner were thinking of a cat. (There’s no signaling going on, just neurons firing in the same order as if they were receiving signals.) Would some entity somewhere suddenly think of a cat?

It seems unlikely, doesn’t it? But the individual neurons in all these cases are behaving exactly the same. So this would seem to dispel the possibility of a purely reductionist (i.e. it’s all in the tissue) model of consciousness.
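To make the thought experiment concrete, here is a tiny toy simulation (my own sketch in Python, not anything from “The Mind’s I”; all the numbers and names are invented for illustration). Scenario A is the intact brain, where firing spreads across synapses; scenario B is the scattered, zapped neurons, where each one is simply triggered on the recorded schedule with no signaling at all.

import random

N_NEURONS = 20   # toy scale; a real brain has tens of billions
N_STEPS = 50
THRESHOLD = 2    # how many firing inputs a neuron needs in order to fire next step
random.seed(42)

# Invented "synapses": each neuron listens to three random neighbors.
inputs = {i: random.sample(range(N_NEURONS), 3) for i in range(N_NEURONS)}

def run_connected_brain():
    """Scenario A: an intact brain where firing propagates across synapses."""
    firing = set(random.sample(range(N_NEURONS), 5))  # initial stimulus
    history = []
    for _ in range(N_STEPS):
        history.append(sorted(firing))
        # a neuron fires next step if enough of its inputs just fired
        firing = {i for i in range(N_NEURONS)
                  if sum(j in firing for j in inputs[i]) >= THRESHOLD}
    return history

def run_zapped_neurons(schedule):
    """Scenario B: the same neurons scattered far apart, no signaling.
    Each neuron is simply zapped at the times recorded in the schedule."""
    return [sorted(step) for step in schedule]

schedule = run_connected_brain()
replayed = run_zapped_neurons(schedule)
print("Each neuron's activity is identical in both scenarios:", schedule == replayed)

The replay is identical by construction, and that triviality is the point: nothing any individual neuron does distinguishes the connected brain from the scattered, zapped one.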

I just stumbled on some general theories that address this issue. They’re called “electromagnetic theories of consciousness.” (Link goes to the wiki page about them.) The idea is that when you have a bunch of neurons in a brain they are, because of their electrical activity, creating an electromagnetic field. And somehow this field is consciousness. The field is not only created by the brain’s neurons, it affects them as well, so the field and brain effectively pass signals back and forth. The wiki page has details.

The starting point for McFadden and Pockett’s theory is the fact that every time a neuron fires to generate an action potential, and a postsynaptic potential in the next neuron down the line, it also generates a disturbance in the surrounding electromagnetic field. McFadden has proposed that the brain’s electromagnetic field creates a representation of the information in the neurons. Studies undertaken towards the end of the 20th century are argued to have shown that conscious experience correlates not with the number of neurons firing, but with the synchrony of that firing.[9] McFadden views the brain’s electromagnetic field as arising from the induced EM field of neurons. The synchronous firing of neurons is, in this theory, argued to amplify the influence of the brain’s EM field fluctuations to a much greater extent than would be possible with the unsynchronized firing of neurons.
McFadden thinks that the EM field could influence the brain in a number of ways. Redistribution of ions could modulate neuronal activity, given that voltage-gated ion channels are a key element in the progress of axon spikes. Neuronal firing is argued to be sensitive to the variation of as little as one millivolt across the cell membrane, or the involvement of a single extra ion channel. Transcranial magnetic stimulation is similarly argued to have demonstrated that weak EM fields can influence brain activity.[citation needed]
McFadden proposes that the digital information from neurons is integrated to form a conscious electromagnetic information (cemi) field in the brain. Consciousness is suggested to be the component of this field that is transmitted back to neurons, and communicates its state externally. Thoughts are viewed as electromagnetic representations of neuronal information, and the experience of free will in our choice of actions is argued to be our subjective experience of the cemi field acting on our neurons.

I’m not agreeing with this (frankly, I still don’t really understand what electromagnetic fields are) but it does address the problems with the reductionist view.

Can metal remember?

I’ve mentioned in the past an idea that I freely concede may be crazy but it’s interesting enough to keep afloat: the possibility that consciousness is a basic property of all things and when matter interacts in complex and networked ways (like it does in a human brain or an advanced computer) higher, self-aware consciousness develops.

This L.A. Times article offers more food for thought. It’s about a UCLA scientist who has developed a microchip that he claims can remember. How does it do it? Well, the science writing in this article is so vague that I’m pretty sure the reporter doesn’t understand it, and I sure don’t. It basically sounds like the scientist is creating neuron-like connections with metal strands and passing electricity through them.

In a lab, they placed a series of copper wire posts, mounted on a silicon wafer, into a solution of silver. As the copper dissolved, the silver formed intricate hair-like strands, as complex as the human cortex. It was the birth of the dust ball.

Building the chip is “extraordinarily simple,” Stieg says. Once the strands are created, they are exposed to sulfur, which provides electrical and ionic conductivity, and when electrical signals are sent through them, atoms migrate through each intersection of silver, each strand over strand.

Much as stimulus changes the brain by building over time synaptic patterns that can be associated to memory, the signals over time change the structure of the chip. Bridges form between the strands, further altering the chip.

What I find interesting here is that the default behavior of the components of this chip seems to be to seek out “connections,” to create bridges, much like neurons in the human brain branching and creating new synapses.

What causes neurons and silver strands to seek connections? I believe it is the fundamental need of all things to seek love!

No, seriously, I dunno. But if there’s a basic property of matter to, in some way, attract other matter, then it’s not that amazing that connections of the sort that could engender consciousness would arise.