Archive for the 'Philosophy' Category
August 26th, 2014 by Wil
Science writer Nicholas Wade recently wrote a book about the role of race in the development of human culture. According to his thesis, the different races possess more or less of certain collections of genes, and some of these genes are responsible for human behavior; therefore, certain races are genetically predisposed toward certain behaviors*. This is controversial because it implies that in some sense races can’t change, and efforts to help them do so may be doomed to failure.
* Summarizing what Wade is saying is close to impossible, and I’m sure people could quibble with what I’m saying here, but I feel it’s close enough for this discussion.
Many people disagree with Wade’s theory, and one frequent rebuttal is that race itself doesn’t exist—it is, they frequently say, a “social construct.” By this they mean that the division of race has no real meaning in nature. For example, the term species divides animal groups that can’t reproduce with each other. In that sense, species is a real term. But race is much harder to define. Different races (called sub-species by people who debate this stuff) can have sex with each other. One might point out the differences in skin color and appearance between different races, but that gets messy quickly. There are plenty of light-skinned blacks, Asian-looking Caucasians, etc.
In this sense, I tend to agree that race is a social construct. But, as you think about it, so is pretty much everything. Words have meaning because enough of us got together and agreed they have meaning. If we didn’t all agree that a cup was a cup and that its purpose was to hold things to drink, it wouldn’t be a cup. If everyone on earth died, then cups would no longer exist. They might exist in the sense that their matter would still exist (assuming the earth wasn’t destroyed or what have you), but as an object—a category—cups would be extinct. The definition of cups is a man-made distinction which has no objective meaning.
(Of course, definitions are kind of blurry. Some people might look at a tall cup and claim it’s a flower vase. And we also hear about weird German words that have no translation in English.)
This reminds me of a few tidbits I’ve read in relation to Buddhist thought. There is a notion there that you can experience an object before you apply all the man-made definitions and correlations related to it. I suppose we all do this for a nanosecond before we mentally identify an object. For the briefest of moments, before you identify a cup, you experience it as some undefined thing. (This moment is so fast it’s questionable whether you can say we “experience it,” but there you have it.)
Watching my dad and his wife, both in their 90s, I see a certain breakdown of this system of categories, this taxonomy, that we apply to everything around us. They might be baffled by what a fairly basic object is, or they might understand it but mislabel it; there’s a lot of calling things by words that rhyme with the real name—cup could become pup, for example. I suppose this is what life was like when we were babies—everything was just a thing, and often we probably couldn’t even differentiate between things. A newspaper next to an apple next to a kitten was just a pile of “stuff” in our new minds.
This leads to an interesting point. What babies and demented people have in common are essentially brains that don’t categorize well. The neurons of their brains have limited connections (either because the connections haven’t formed yet, as in the case of babies, or because they have deteriorated, as in the case of older adults). This would imply that the meaning we apply to the objects we encounter is literally wired into our brains. It’s the structure of our brains that applies meaning. From this one can presume that a brain structured differently would find different meanings in the world. (Say the brain of an autistic child. Or an alien. Or a sentient computer.)
It really leads to the question of “what is real.” Our words are not real. Our categories are not real. The only thing really real that I can see is the physical matter of the universe. Even the distinctions between these bits of matter (e.g. molecules, atoms, electrons, quarks etc.) are not really real.
This is heavy shit to think about. It’s giving me a headache.
July 22nd, 2014 by Wil
I’ve mentioned in the past that while I agree with atheism I find the notion that you can have morality without religion to be, well, less obvious than many make it out to be. (I tackled this idea in detail here.)
A lot of secular humanists point to the golden rule as an easy source for morality. That rule is, of course, “do unto others as you would have them do unto you.” Over at Andrew Sullivan’s blog a reader makes the case.
…we also have deeply ethical atheists, agnostics, and secularists who debate the fine points of moral behavior with as much rigor and passion as theologists do, and who are building great ethical revolutions such as environmentalism on the surprisingly robust foundation of a practical, secular ethics.
Much of this success rests on the self-explanatory Golden Rule. No fear of damnation is needed to explain why it’s a good idea to treat others as you would like to be treated. It’s a contract, and you get security and stability only if you obey it. The obviousness of this contract also makes it a firm basis for moral innovation.
The problem is that while the golden rule might work some of the time, it really doesn’t work all of the time. The idea is that if I don’t want to be screwed, I shouldn’t screw others. But really you just don’t want others to know you’ve screwed them. If you can screw over other people without them knowing it, then you get all the benefits of the golden rule, plus a little extra for yourself. Also, the premise of the golden rule is that your security and stability will be harmed if you violate it. But what if I am strong enough that I cannot be harmed? Say I’m a king, or some kind of mafia boss? Then I can break the golden rule with at least some impunity and not fear for my security. As an incentive for morality, the golden rule does not work consistently and seems to have many caveats. Counter to the writer above, there are cases where one can get security and stability without obeying the golden rule.
There’s a third complaint I’d make, which is that the golden rule isn’t really moral in any kind of purist sense. According to the golden rule, you should treat others well not because you really want to but because you want to be treated well. It’s selfish. This may be acceptable, but I think the realization takes a bit of the wind out of the sails of people like the above writer who righteously tout the golden rule as something almost holy.
Is religion the way to morality then? As I’ve said in the past, even it is flawed. The Christian argument is that one should be good to avoid burning in hell. Again, this is really a selfish argument: do this to avoid pain (and lots of it!).
I do suspect morality evolved as a social practice that tended to work for most of those who engaged in it. Those who followed the golden rule flourished and were successful at passing on their genes etc. I presume it is, in some hard to imagine way, encoded into our genes. But morality and the golden rule are not really “logical” in any sense.
June 26th, 2014 by Wil
I continue to read Alan Harrington’s “The Immortalist.” One of the book’s arguments is that man, faced with the modern observation that God is dead, tries to achieve immortality by becoming famous, thus ensuring that he (man, not God) will not be forgotten. We do this not consciously, of course; this drive for celebrity and status is buried somewhere in the nether regions of the subconscious. This leads to a certain kind of craziness, as Harrington notes in one paragraph:
Middle-class people in particular have always competed for the gods’ notice, but today, with religious authority on the wane, this competition has become frantic, in some arenas unbearably so. We have a merciless obsession with accomplishment. Millions are caught up in the neurotic new faith that a human being must succeed or die. For such individuals it is not enough to enjoy life, or simply do a good job or be a good person. No, the main project, pushing all other concerns in the background, is to make a name that the gods will recognize.
I have to say this summarizes my internal battles explicitly. On one hand I derive pleasure from obtaining skills—musicianship, writing, drawing, speaking foreign languages, being a skilled lover, etc.—but on the other I realize the fruitlessness of it all. These skills have little value in the job marketplace; they are only good for generating a certain kind of respect. But why earn respect? I suppose Harrington would argue because on some level I feel it will lead to some form of immortality. But if that is a false belief, as it almost certainly is, shouldn’t I just chill out and enjoy life?
He has an interesting phrase in there: “succeed or die.” It sounds very Darwinian. I wonder if this human obsession with skills and accomplishment became stronger after Darwin put forth his “survival of the fittest” theory.
June 16th, 2014 by Wil
First of all, I’m back in the saddle again so to speak. Was out of town for several weeks and neglected blogging.
While away I read most of a book I’ve been meaning to tackle: “Brainwashed – The Seductive Appeal of Mindless Neuroscience.” It’s a book coming from the “neuroskeptic” school—a viewpoint arguing that many of the claims neuroscience makes are inflated. It’s hard to argue with that basic point; you do see seemingly unlikely predictions coming out of neuroscience (and science) all the time. But that said, I find the book rather mushy. I just read through the chapter on free will and found it hard to follow the arguments. Sam Harris’s eBook called “Free Will” seems more cogently argued. (He argues against the existence of free will, the opposite view of “Brainwashed.”)
The free will chapter did have an interesting anecdote about Melvin Lerner, a social psychologist who developed the idea that people like to believe in a “just world hypothesis” (i.e., that the good are rewarded and the bad punished). It seems a harmless enough delusion, but what if we alter our perception of the world to make it map to a just world? And in doing so, what if we presume that people who suffer deserve to suffer? The book states…
In one of his seminal experiments, Lerner asked subjects to observe a ten-minute video of a fellow student as she underwent a learning experiment involving memory. The student was strapped into an apparatus sprouting electrode leads and allegedly received a painful shock whenever she answered a question incorrectly (she was not receiving real shocks, of course, but believably feigned distress as if she were). Next, the researchers split the observers into groups. One was to vote on whether to remove the victim from the apparatus and reward her with money for correct answers. All but one voted to rescue her. The experimenters told another group of observers that the victim would continue to receive painful shocks; there was no option for compensation. When asked to evaluate the victim at this point, subjects in the victim-compensated condition rated her more favorably (e.g. more “attractive,” more “admirable”) than did subjects in the victim-uncompensated condition, in which the victim’s suffering was greater.
I think it’s possible to extrapolate too much from these kinds of experiments, but this does kind of jibe with my sense of the world. We see someone suffering for whom we can do nothing and as a result we lower our opinion of them, basically saying, “sucks to be you!”
Ha! Humans are scum!
April 28th, 2014 by Wil
I’m working on my next article for acid logic, and it’s essentially a list of modern day fears that I think could be exploited by horror movie creators. One fear is fairly esoteric: a fear of the loss of identity brought about by the hyper-connectedness of the age. In essence, we are so hooked into each other that when a subject comes up we immediately know what everyone else thinks about it and tailor our opinions and ideas to match the group we want to associate with. (Political tribes are an obvious example of these groups.)
The fear is not so much about this process but the crisis of self it could bring about. If you wake up one day and find that your opinions totally match some subset of the masses, would you start to wonder whether you really exist on a meaningful level? Would you conceive of yourself as merely a vessel for popular opinion?
April 25th, 2014 by Wil
In one of my recent articles on heretical ideas I noted that a) I’m an atheist, and b) I don’t think there is any way to divine morality. This puts me at odds with most of the “New Atheists” like Sam Harris and Richard Dawkins who argue we can have morality without God.
There’s an interesting article in the Spectator by a writer named Theo Hobson. He is presumably religious and takes a rather snippy tone to atheists. But I think he makes some points that complement my own and his piece is worth reading if you need something philosophical to curl up with.
In these paragraphs he points out something I’ve thought about. New atheists dismiss faith, but their insistence that some kind of moral rule can be ascertained sounds an awful lot like faith itself.
The trouble is that too many atheists simply assume the truth of secular humanism, that it is the axiomatic ideology: just there, our natural condition, once religious error is removed. They think morality just comes naturally. It bubbles up, it’s instinctive, not taught as part of a cultural tradition. In The God Delusion Richard Dawkins tries to strengthen this claim using his biological expertise, arguing that humans have evolved to be altruistic because it ultimately helps their genes to survive. But in the end, he admits that no firm case can be made concerning the evolutionary basis of morality. He’s just gesturing with his expertise, rather than really applying it to the issue at hand.
Here’s his muddle. On one hand he believes that morality, being natural, is a constant thing, stable throughout history. On the other hand, he believes in moral progress. To square the circle he plunges out of his depth, explaining that different ages have different ideas of morality, and that in recent times there has happily been a major advance in our moral conventions: above all, the principle of equality has triumphed. Such changes ‘certainly have not come from religion’, he snaps. He instead points to better education about our ‘common humanity with members of other races and with the other sex — both deeply unbiblical ideas that come from biological science, especially evolution’. But biological science, especially evolution, can be used to authorise eugenics and racism. The real issue is the triumph of an ideology of equality, of humanism. Instead of asking what this tradition is, and where it comes from, he treats it as axiomatic. This is just the natural human morality, he wants us to think, and in our times we are fortunate to see a particularly full expression of it.
It’s interesting that he argues that new atheists feel moral truth is “instinctive.” I tackled this very premise in my article.
Another New Atheist, Sam Harris, hints at something similar in this Big Think video when he says (after arguing that we don’t need God for morality) that we have “some very serviceable intuitions about what good and evil are.” The problem, however, is that feelings and intuitions (programmed into us via evolution or not) are not a logical basis from which we can define moral behavior. Most of us would agree that proposing the murder of a 10-week-old baby feels wrong, but that doesn’t mean it can be logically shown to be so. We can even construct scenarios where killing the baby is the right thing to do for the greater good (say, the baby is the carrier of a deadly disease that cannot be allowed to spread). In such cases, killing the baby might be the right thing to do (according to conventional ethics), but I think we all know that it would still feel awful to carry out the act. From that we must conclude that feelings/intuitions are not a trustworthy source for divining morals.
April 10th, 2014 by Wil
Today in my readings I came across mention of something I’d never heard of: breatharians. These are people who believe they can live without food by subsisting on air and sunlight. It sounds insane, of course, but a Google search reveals plenty of conversation about the topic. How do they do it? Well, for the most part they don’t.
In 1983, most of the leadership of the movement in California resigned when Wiley Brooks, a notable proponent of breatharianism, was caught sneaking into a hotel and ordering a chicken pie.
Mmmm… chicken pie.
Under controlled conditions where people are actually watched to see if they can do it without sneaking some real food, they fail. The name most commonly associated with breatharianism is Australia’s Jasmuheen (born Ellen Greve), who claims to live only on a cup of tea and a biscuit every few days. However, the only supervised test of this claim (by the Australian edition of 60 Minutes in 1999) left her suffering from severe dehydration and the trial was halted after four days, despite Greve’s insistence she was happy to continue. She claimed the failure was due to being near a city road, leading to her being forced to breathe “bad air”. She continued this excuse even after she was moved to the middle of nowhere.
The various forms of human insanity seem to have no limits.
April 5th, 2014 by Wil
I’ve been reading an interesting book on twin studies called “Identically Different.” It gets a reader up to date on the current analysis of what kinds of human behavior can be attributed to genes. The book is broken up into chapters such as “The Happiness Gene,” “The Talent Gene,” etc. (I should make clear the author is far from an absolutist who believes genes are the dominant force in our lives; he subscribes to the mainstream belief that our behavior is a combination of nature and nurture.)
One chapter is “The God Gene.” It explores the idea that some part of our brain is wired to believe in God or at least something “greater”. The author is not the first to make this argument. (I’ve commented on similar material here.)
Part of how scientists study this sort of thing is by asking people to fill out self-surveys on their religiosity. And here I have a small beef with the process. The author describes two questions on a multi-question survey.
I believe that all life depends on some spiritual order or power that cannot be completely explained — true or false.
Often, when I look at an ordinary thing, something wonderful happens — I get the feeling I am seeing it fresh for the first time — true or false.
If I’m interpreting this correctly, answering false to these questions would be a marker for atheism, and marking true would imply spirituality.
I would answer true for the first and true for the second. (I can’t really claim to be blown away in these moments of personal beauty, but, yeah, sometimes I am struck by the beauty of things.) But I don’t really see this as contradictory. Everything I’ve seen about the universe seems to imply a lack of God (in the conventional religious sense). But I don’t think that means I can’t be spiritual, insofar as I enjoy the grandeur of the universe. And I am willing to concede that there could be a certain kind of greater consciousness in the universe (as I described here).
The questions set up a battle—atheism versus spirituality—that I don’t think is necessary. I will say, however, that while I come out in favor of spirituality here, that doesn’t mean I buy into the vast wastelands of idiocy that are often touted as spirituality—channeling aliens and all that rot.
March 12th, 2014 by Wil
I’ve mentioned the book “The Age of Insight,” which I found quite interesting. It was about many things, including the exploits of various artists who lived in Vienna around the turn of the century (Gustav Klimt, Egon Schiele, etc.). One thing I recall from the book is how these men really struggled to come up with unique art. They experimented with different ideas and incorporated a lot of the discoveries of that era’s science into their art. (Klimt incorporated images of blastocyst cells in his paintings, for example.) Their art was more than just pretty pictures—it had meat and substance*.
* Visual artists of the day were spurred on by two pressing challenges—1) the advent of photography, which rendered realistic painting somewhat moot, and 2) the rise of Freud and the idea that one’s “inner world” might be a more fascinating place than the outer world.
I think this trend lasted through the 20th century. Think of jazz musicians, existential filmmakers, Robert Crumb, psychedelic music etc. Whatever you think of this stuff (I, personally, find most psychedelic music laughable though I applaud the band Ultimate Spinach) it was art with a lot of thought behind it.
This was art made by what I would call the creative class. These were artists (of all disciplines) exploring the world, making art not for any obvious immediate use (like being used in a greeting card or as a portrait.)
I’m not sure you see much of this today. With the decimation of the value of content in the digital age, I’m not sure it’s viable to make art that doesn’t have immediate use.
There was a book that came out several years ago called “The Rise of the Creative Class.” I never read it, but it would seem to dispute what I’m saying. But examine this blurb about the book (from the book’s Amazon page):
He defines this class as those whose economic function is to create new ideas, new technology, and new creative content. In general this group shares common characteristics, such as creativity, individuality, diversity, and merit. The author estimates that this group has 38 million members, constitutes more than 30 percent of the U.S. workforce, and profoundly influences work and lifestyle issues. The purpose of this book is to examine how and why we value creativity more highly than ever and cultivate it more intensely.
This creative class is creating for a (usually commercial) reason. I absolutely agree that creating a web application can be a very creative pursuit—after all, I’ve been part of that process—but it’s different from explorational creating, from the act of creating to “find your voice.” We value creativity today if it has short term payoff, not so much if the benefits aren’t immediately obvious. Art now has to immediately find its value in the marketplace.
I should be clear: I realize there’s some crossover here. Artists making “for immediate use” art often spend their off hours making more esoteric art. But I find the general trend troubling.
February 26th, 2014 by Wil
In the realm of brain studies there’s a fairly reductionist view that argues that our consciousness and subjective experience are firmly rooted in our physical brains. The idea goes that we have these incredibly complex interactions between tens of billions of neurons, and out of that arises our experience of being alive. Most authors I’ve read on the topic freely concede that the exact nature of how consciousness arises from this is a mystery, but it seems pretty clear that our self corresponds to our neural tissue. Simply consider that someone can have a stroke and become a different person—they can no longer speak or form memories or control their anger. The soul seems to exist in physical form (or, more accurately, it doesn’t exist at all).
I’m pretty sympathetic to this view. But the book “The Mind’s I” has a thought experiment that does challenge it. First let’s consider a brain in its ideal form. It’s sitting there, neurons firing, creating thoughts. Now let’s imagine an incredible surgery where you go in and separate every single neuron, placing each one in its own chemical bath to keep it alive. (This is, of course, impossible.) You then attach electronic signaling/receiving devices so each neuron can communicate with whatever neurons it was “attached” to (i.e., shared a synapse with) before. So, basically, even though the neurons are now separate, their signaling is exactly the same as it was in the whole brain. Can we still envision a mind rising out of all this?
Well, I dunno… maybe…
But it gets worse. Instead of putting little signaling/receiving devices on each neuron, attach little zappers that simply fire different amounts of electricity. Now separate these neurons by hundreds of miles. Then fire off each of the zappers so that the neurons fire the exact way they would if the brain’s owner was thinking of a cat. (There’s no signaling going on, just neurons firing in the same order as if they were receiving signals.) Would some entity somewhere suddenly think of a cat?
It seems unlikely, doesn’t it? But the individual neurons in all these cases are behaving exactly the same. So this would seem to dispel the possibility of a purely reductionist (i.e., it’s all in the tissue) model of consciousness.
I just stumbled on some general theories that address this issue. They are called “electromagnetic theories of consciousness.” (Link goes to the wiki page about them.) The idea is that when you have a bunch of neurons in a brain, they are, because of their electrical activity, creating an electromagnetic field. And somehow this field is consciousness. The field is not only created by the brain’s neurons, it affects them as well, so the field and brain effectively pass signals back and forth. The wiki page has details.
The starting point for McFadden and Pockett’s theory is the fact that every time a neuron fires to generate an action potential, and a postsynaptic potential in the next neuron down the line, it also generates a disturbance in the surrounding electromagnetic field. McFadden has proposed that the brain’s electromagnetic field creates a representation of the information in the neurons. Studies undertaken towards the end of the 20th century are argued to have shown that conscious experience correlates not with the number of neurons firing, but with the synchrony of that firing. McFadden views the brain’s electromagnetic field as arising from the induced EM field of neurons. The synchronous firing of neurons is, in this theory, argued to amplify the influence of the brain’s EM field fluctuations to a much greater extent than would be possible with the unsynchronized firing of neurons.
McFadden thinks that the EM field could influence the brain in a number of ways. Redistribution of ions could modulate neuronal activity, given that voltage-gated ion channels are a key element in the progress of axon spikes. Neuronal firing is argued to be sensitive to the variation of as little as one millivolt across the cell membrane, or the involvement of a single extra ion channel. Transcranial magnetic stimulation is similarly argued to have demonstrated that weak EM fields can influence brain activity.
McFadden proposes that the digital information from neurons is integrated to form a conscious electromagnetic information (cemi) field in the brain. Consciousness is suggested to be the component of this field that is transmitted back to neurons, and communicates its state externally. Thoughts are viewed as electromagnetic representations of neuronal information, and the experience of free will in our choice of actions is argued to be our subjective experience of the cemi field acting on our neurons.
I’m not agreeing with this (frankly, I still don’t really understand what electromagnetic fields are) but it does address the problems with the reductionist view.