Archive for the 'Philosophy' Category
September 26th, 2014 by Wil
In a recent New York Review of Books article entitled “What Your Computer Can’t Know” philosopher John Searle provides a fairly helpful analysis of the different kinds of knowledge. He states:
The distinction between objectivity and subjectivity looms very large in our intellectual culture but there is a systematic ambiguity in these notions that has existed for centuries and has done enormous harm. There is an ambiguous distinction between an epistemic sense (“epistemic” means having to do with knowledge) and an ontological sense (“ontological” means having to do with existence). In the epistemic sense, the distinction is between types of claims (beliefs, assertions, assumptions, etc.). If I say that Rembrandt lived in Amsterdam, that statement is epistemically objective. You can ascertain its truth as a matter of objective fact. If I say that Rembrandt was the greatest Dutch painter that ever lived, that is evidently a matter of subjective opinion; it is epistemically subjective.
Underlying this epistemological distinction between types of claims is an ontological distinction between modes of existence. Some entities have an existence that does not depend on being experienced (mountains, molecules and tectonic plates are good examples.) Some entities exist only so far as they are experienced (pains, tickles and itches are examples.) This distinction is between the ontologically objective and the ontologically subjective. No matter how many machines may register an itch, it is not really an itch until somebody consciously feels it: it is ontologically subjective.
This seems quite useful and is worth keeping in mind. But I feel there are blurry lines that need to be acknowledged. Let’s look at what a mountain is. We can break that entity into a couple of “parts” – there’s the fact that the mountain is there in some objective sense (some people who question the very nature of reality might dispute this point) and then there is my observation of the mountain, my act of seeing* the mountain. The first part is objective, the second subjective. But let’s now look at an itch. An itch is similar to pain and is caused by some minor degradation of your physical body. Maybe a bug bit you, maybe a wound is healing. The actual sensation of the itch is your sensory awareness of this degradation. So again, there are two components—the objective part (the biting bug or whatever it is) and the subjective (the itchy feeling). My point being that a mountain and an itch are not all that different; they share these two components. An itch is really just a way of sensing the thing that attacked your skin.
* Sight is really the only sense that allows one to get a clear representation of a mountain. There are other objects, however, that one can use various senses to appreciate. A lasagna can be seen, smelled and tasted for example.
And going to the first quoted paragraph: We can say that it is objective to say that Rembrandt lived in Amsterdam, but is it really? It is dependent on us agreeing to the human convention that this particular place on earth is called Amsterdam and that this particular bundle of historical matter was called Rembrandt. For the statement to be true we (the observers of the statement) need to agree to various taxonomies. If I could get everyone to agree on some system by which we could judge art, I might very well be able to objectively claim that Rembrandt was the greatest Dutch painter. That really is the difference between the two statements: how many people agree on the terms. (There is near universal agreement on terms like “Rembrandt” and “Amsterdam,” less so on “great painter.”)
It may seem I’m trying to be difficult here, but I’m merely pointing out how hard it is to really define these terms.
September 1st, 2014 by Wil
As astute readers are doubtless aware, I recently published a post entitled “What is Real?” in which I posited that most everything we encounter does not exist. My point wasn’t that physical matter doesn’t exist (though maybe it doesn’t) but that the objects we group matter into are not objectively real but are formed from subjective, man-made categories. So the atoms and molecules* that make up a cup, or a cat, or a hat exist, but the objects—cups, cats, and hats—are dependent on humans for their existence.
*Of course, “atoms” and “molecules” are themselves man made categories.
Today I stumbled onto an interesting rumination by someone named Alan Lightman. He’s titled his piece “My Own Personal Nothingness” and uses it to explore this “nothing is real” conceit. At one point he argues that institutions—churches, organizations, governmental bodies, political parties—are not real. They, like everything else, exist only in the minds of men.
Likewise, our human-made institutions. We endow our art and our cultures and our codes of ethics and our laws with a grand and everlasting existence. We give these institutions an authority that extends far beyond ourselves. But in fact, all of these are constructions of our minds. That is, these institutions and codes and their imputed meanings are all consequences of exchanges between neurons, which in turn are simply material atoms. They are all mental constructions. They have no reality other than that which we give them, individually and collectively.
This might seem an innocuous, even boring statement but, as I’ve argued elsewhere, this has huge ramifications for morality. In modern society, if you commit an evil act, you are (hopefully) caught and placed in prison. But if our institutions of law and morality are mere figments of our imagination, how do we know how to objectively separate right from wrong?
But there’s also something a bit freeing about the notion that institutions are meaningless. It means that we really don’t owe institutions any fealty, that we shouldn’t concern ourselves with them. I know a lot of people who defer greatly to their political party (and roundly condemn anyone who doesn’t support their party) and when a politician of that party is caught in a wrongdoing—say, fornicating with a goat—it pains these people. But these institutions only have the power we give them. I used to be much more deferential to academics or doctors and the institutions they represent, but I now see they’re taking guesses about how life works, just like the rest of us. (I don’t want to overplay this statement—I defer to doctors more than astrologers, but I recognize they aren’t perfect.)
Anyway, Lightman’s short piece is worth reading as are the comments it has generated.
August 26th, 2014 by Wil
Science writer Nicholas Wade recently wrote a book about the role of race in the development of human culture. According to his thesis, the different races possess more or less of certain collections of genes, and some of these genes are responsible for human behavior; therefore certain races are genetically predisposed towards certain behaviors*. This is controversial because it implies that in some sense, races can’t change and efforts to help them do so may be doomed to failure.
* Summarizing what Wade is saying is close to impossible and I’m sure people could quibble with what I’m saying here but I feel it’s close enough for this discussion.
Many people disagree with Wade’s theory and one frequent rebuttal is that race itself doesn’t exist—it is, they frequently say, a “social construct.” By this they mean the division of race has no real meaning in nature. For example, the term species divides animal groups that can’t reproduce with each other. In that sense, species is a real term. But race is much harder to define. Different races (called sub-species by people who debate this stuff) can interbreed with each other. One might point out the differences of skin color and appearance between different races but that gets messy quickly. There are plenty of light-skinned blacks or Asian-looking Caucasians etc.
In this sense, I tend to agree that race is a social construct. But, as you think about it, so is pretty much everything. Words have meaning because enough of us got together and agreed they have meaning. If we didn’t all agree that a cup was a cup and that its purpose was to hold things to drink, it wouldn’t be a cup. If everyone on earth died then cups would no longer exist. They might exist in the sense that their matter would still exist (assuming the earth wasn’t destroyed or what have you) but as an object—a category—cups would be extinct. The definition of cups is a man-made distinction which has no objective meaning.
(Of course, definitions are kind of blurry. Some people might look at a tall cup and claim it’s a flower vase. And we also hear about weird German words that have no translation in English.)
This reminds me of a few tidbits I’ve read in relation to Buddhist thought. There is a notion there that you can experience an object before you apply all the man-made definitions and correlations related to it. I suppose we all do this for a nanosecond before we mentally identify an object. For the briefest of moments, before you identify a cup, you experience it as some undefined thing. (This moment is so fast it’s questionable whether you can say we “experience it” but there you have it.)
Watching my dad and his wife, both in their 90s, I see a certain breakdown of this system of categories, this taxonomy, that we apply to everything around us. They might be baffled by what a fairly basic object is, or they might understand it but mislabel it; they often call things by words that rhyme with the real name—cup could become pup, for example. I suppose this is what life was like when we were babies—everything was just a thing, and often we probably couldn’t even differentiate between things. A newspaper next to an apple next to a kitten was just a pile of “stuff” in our new minds.
This leads to an interesting point. What babies and demented people have in common are essentially brains that don’t categorize well. The neurons of their brains have limited connections (either because the connections haven’t formed yet as in the case of babies, or because they have deteriorated as in the case of older adults.) This would imply that the meaning we apply to the objects we encounter is literally wired into our brains. It’s the structure of our brains that applies meaning. From this one can presume that a brain structured differently would find different meanings in the world. (Say the brain of an autistic child. Or an alien. Or a sentient computer.)
It really leads to the question of “what is real.” Our words are not real. Our categories are not real. The only thing really real that I can see is the physical matter of the universe. Even the distinctions between these bits of matter (e.g. molecules, atoms, electrons, quarks etc.) are not really real.
This is heavy shit to think about. It’s giving me a headache.
July 22nd, 2014 by Wil
I’ve mentioned in the past that while I agree with atheism I find the notion that you can have morality without religion to be, well, less obvious than many make it out to be. (I tackled this idea in detail here.)
A lot of secular humanists point to the golden rule as an easy source for morality. That rule is, of course, “do unto others as you would have them do unto you.” Over at Andrew Sullivan’s blog a reader makes the case.
…we also have deeply ethical atheists, agnostics, and secularists who debate the fine points of moral behavior with as much rigor and passion as theologists do, and who are building great ethical revolutions such as environmentalism on the surprisingly robust foundation of a practical, secular ethics.
Much of this success rests on the self-explanatory Golden Rule. No fear of damnation is needed to explain why it’s a good idea to treat others as you would like to be treated. It’s a contract, and you get security and stability only if you obey it. The obviousness of this contract also makes it a firm basis for moral innovation.
The problem is that while the golden rule might work some of the time, it really doesn’t work all of the time. The idea is that if I don’t want to be screwed, I shouldn’t screw others. But really you just don’t want others to know you’ve screwed them. If you can screw over other people without them knowing it, then you get all the benefits of the golden rule, plus a little extra for yourself. Also, the premise of the golden rule is that your security and safety will be harmed if you violate the golden rule. But what if I am strong enough that I cannot be harmed? Say I’m a king, or some kind of mafia boss? Then I can break the golden rule with at least some impunity and not fear for my security. As an incentive for morality, the golden rule does not work consistently and seems to have many caveats. Counter to the writer above, there are cases where one can get security and stability without obeying the golden rule.
There’s a third complaint I’d make which is the golden rule isn’t really moral in any kind of purist sense. According to the golden rule, you should treat others well not because you really want to but because you wanted to be treated well. It’s selfish. This may be acceptable, but I think the realization takes a bit of the wind out of the sails of people like the above person who righteously tout the golden rule as something almost holy.
Is religion the way to morality then? As I’ve said in the past, even it is flawed. The Christian argument is that one should be good to avoid burning in hell. Again, this is really a selfish argument: Do this to avoid pain (and lots of it!)
I do suspect morality evolved as a social practice that tended to work for most of those who engaged in it. Those who followed the golden rule flourished and were successful at passing on their genes etc. I presume it is, in some hard to imagine way, encoded into our genes. But morality and the golden rule are not really “logical” in any sense.
June 26th, 2014 by Wil
I continue to read Alan Harrington’s “The Immortalist.” One of the book’s arguments is that man, faced with the modern observation that god is dead, tries to achieve immortality by becoming famous, thus ensuring that he (man, not god) will not be forgotten. We do this not consciously, of course; this drive for celebrity and status is buried somewhere in the nether-regions of the subconscious. This leads to a certain kind of craziness as Harrington notes in one paragraph:
Middle-class people in particular have always competed for the gods’ notice, but today, with religious authority on the wane, this competition has become frantic, in some arenas unbearably so. We have a merciless obsession with accomplishment. Millions are caught up in the neurotic new faith that a human being must succeed or die. For such individuals it is not enough to enjoy life, or simply do a good job or be a good person. No, the main project, pushing all other concerns into the background, is to make a name that the gods will recognize.
I have to say this summarizes my internal battles explicitly. On one hand I derive pleasure by obtaining skills—musicianship, writing, drawing, speaking foreign languages, being a skilled lover etc.—but on the other I realize the fruitlessness of it all. These skills have little value in the job marketplace; they are only good for generating a certain kind of respect. But why earn respect? I suppose Harrington would argue because on some level I feel it will lead to some form of immortality. But if that is a false belief, as it almost certainly is, shouldn’t I just chill out and enjoy life?
He has an interesting phrase in there: “succeed or die.” It sounds very Darwinian. I wonder if this human obsession with skills and accomplishment became stronger after Darwin put forth his “survival of the fittest” theory.
June 16th, 2014 by Wil
First of all, I’m back in the saddle again so to speak. Was out of town for several weeks and neglected blogging.
While away I read most of a book I’ve been meaning to tackle: “Brainwashed – The Seductive Appeal of Mindless Neuroscience.” It’s a book coming from the “neuroskeptic” school—a viewpoint arguing that many of the claims neuroscience makes are inflated. It’s hard to argue with that basic point; you do see seemingly unlikely predictions coming out of neuroscience (and science) all the time. But that said, I find the book rather mushy. I just read through the chapter on free will and found it hard to follow the arguments. Sam Harris’s eBook called “Free Will” seems more cogently argued. (He argues against the existence of free will, the opposite view of “Brainwashed.”)
The free will chapter did have an interesting anecdote about Melvin Lerner, a social psychologist who developed the idea that people like to believe in a “just world hypothesis” (e.g. that the good are rewarded and the bad punished). It seems a harmless enough delusion, but what if we alter our perception of the world to map it to a just world? And in doing so, what if we presume people who suffer deserve to suffer? The book states…
In one of his seminal experiments, Lerner asked subjects to observe a ten-minute video of a fellow student as she underwent a learning experiment involving memory. The student was strapped into an apparatus sprouting electrode leads and allegedly received a painful shock whenever she answered a question incorrectly (she was not receiving real shocks, of course, but believably feigned distress as if she were). Next, the researchers split the observers into groups. One was to vote on whether to remove the victim from the apparatus and reward her with money for correct answers. All but one voted to rescue her. The experimenters told another group of observers that the victim would continue to receive painful shocks; there was no option for compensation. When asked to evaluate the victim at this point, subjects in the victim-compensated condition rated her more favorably (e.g. more “attractive,” more “admirable”) than did subjects in the victim-uncompensated condition, in which the victim’s suffering was greater.
I think it’s possible to extrapolate too much from these kinds of experiments, but this does kind of jibe with my sense of the world. We see someone suffering for whom we can do nothing and as a result we lower our opinion of them, basically saying, “sucks to be you!”
Ha! Humans are scum!
April 28th, 2014 by Wil
I’m working on my next article for acid logic and it’s essentially a list of modern day fears that I think could be exploited by horror movies creators. One fear is fairly esoteric: a fear of the loss of identity brought about by the hyper-connectedness of the age. In essence, we are so hooked in to each other that when a subject comes up we immediately know what everyone else thinks about it and tailor our opinions and ideas to match the group we want to associate with. (Political tribes are an obvious example of these groups.)
The fear is not so much about this process but the crisis of self it could bring about. If you wake up one day and find that your opinions totally match some subset of the masses, would you start to wonder whether you really exist on a meaningful level? Would you conceive of yourself as merely a vessel for popular opinion?
April 25th, 2014 by Wil
In one of my recent articles on heretical ideas I noted that a) I’m an atheist, and b) I don’t think there is any way to divine morality. This puts me at odds with most of the “New Atheists” like Sam Harris and Richard Dawkins who argue we can have morality without God.
There’s an interesting article in the Spectator by a writer named Theo Hobson. He is presumably religious and takes a rather snippy tone to atheists. But I think he makes some points that complement my own and his piece is worth reading if you need something philosophical to curl up with.
In these paragraphs he points out something I’ve thought about. New atheists dismiss faith, but their insistence that some kind of moral rule can be ascertained sounds an awful lot like faith itself.
The trouble is that too many atheists simply assume the truth of secular humanism, that it is the axiomatic ideology: just there, our natural condition, once religious error is removed. They think morality just comes naturally. It bubbles up, it’s instinctive, not taught as part of a cultural tradition. In The God Delusion Richard Dawkins tries to strengthen this claim using his biological expertise, arguing that humans have evolved to be altruistic because it ultimately helps their genes to survive. But in the end, he admits that no firm case can be made concerning the evolutionary basis of morality. He’s just gesturing with his expertise, rather than really applying it to the issue at hand.
Here’s his muddle. On one hand he believes that morality, being natural, is a constant thing, stable throughout history. On the other hand, he believes in moral progress. To square the circle he plunges out of his depth, explaining that different ages have different ideas of morality, and that in recent times there has happily been a major advance in our moral conventions: above all, the principle of equality has triumphed. Such changes ‘certainly have not come from religion’, he snaps. He instead points to better education about our ‘common humanity with members of other races and with the other sex — both deeply unbiblical ideas that come from biological science, especially evolution’. But biological science, especially evolution, can be used to authorise eugenics and racism. The real issue is the triumph of an ideology of equality, of humanism. Instead of asking what this tradition is, and where it comes from, he treats it as axiomatic. This is just the natural human morality, he wants us to think, and in our times we are fortunate to see a particularly full expression of it.
It’s interesting that he argues that new atheists feel moral truth is “instinctive.” I tackled this very premise in my article.
Another New Atheist, Sam Harris, hints at something similar in this Big Think video when he says (after arguing that we don’t need God for morality) that we have “some very serviceable intuitions about what good and evil are.” The problem, however, is that feelings and intuitions (programmed into us via evolution or not) are not a logical means from which we can define moral behavior. Most of us would agree that proposing the murder of a 10 week old baby feels wrong, but that doesn’t mean it can be logically shown to be so. We can even construct scenarios where killing the baby is the right thing to do for the greater good (say, the baby is the carrier of a deadly disease that cannot be allowed to spread). In such cases, killing the baby might be the right thing to do (according to conventional ethics) but I think we all know that it would still feel awful to carry out the act. From that we must conclude that feelings/intuitions are not a trustworthy source of divining morals.
April 10th, 2014 by Wil
Today in my readings I came across mention of something I’d never heard of: breatharians. These are people who believe they can live without food by subsisting on air and sunlight. It sounds insane of course, but a Google search reveals plenty of conversation about the topic. How do they do it? Well, for the most part they don’t.
In 1983, most of the leadership of the movement in California resigned when Wiley Brooks, a notable proponent of breatharianism, was caught sneaking into a hotel and ordering a chicken pie.
Mmmm… chicken pie.
Under controlled conditions where people are actually watched to see if they can do it without sneaking some real food, they fail. The name most commonly associated with breatharianism is Australia’s Jasmuheen (born Ellen Greve), who claims to live only on a cup of tea and a biscuit every few days. However, the only supervised test of this claim (by the Australian edition of 60 Minutes in 1999) left her suffering from severe dehydration and the trial was halted after four days, despite Greve’s insistence she was happy to continue. She claimed the failure was due to being near a city road, leading to her being forced to breathe “bad air”. She continued this excuse even after she was moved to the middle of nowhere.
The various forms of human insanity seem to have no limits.
April 5th, 2014 by Wil
I’ve been reading an interesting book on twin studies called “Identically Different.” It gets a reader up to date on the current analysis of what kinds of human behavior can be attributed to genes. The book is broken up into chapters such as “The Happiness Gene,” “The Talent Gene” etc. (I should make clear the author is far from an absolutist who believes genes are the dominant force in our lives; he subscribes to the mainstream belief that our behavior is a combination of nature and nurture.)
One chapter is “The God Gene.” It explores the idea that some part of our brain is wired to believe in God or at least something “greater”. The author is not the first to make this argument. (I’ve commented on similar material here.)
Part of how scientists study this sort of thing is by asking people to fill out self-surveys on their religiosity. And here I have a small beef with the process. The author describes two questions on a multi-question survey.
I believe that all life depends on some spiritual order or power that cannot be completely explained — true or false.
Often, when I look at an ordinary thing, something wonderful happens — I get the feeling I am seeing it fresh for the first time — true or false.
If I’m interpreting this correctly, answering false to these questions would be a marker for atheism, and marking true would imply spirituality.
I would answer true for the first and true for the second. (I can’t really claim to be blown away in these moments of personal beauty, but, yeah, sometimes I am struck by the beauty of things.) But I don’t really see this as contradictory. Everything I’ve seen about the universe seems to imply a lack of God (in the conventional religious sense.) But I don’t think that means I can’t be spiritual in so much as enjoying the grandeur of the universe. And I am willing to concede that there could be a certain kind of greater consciousness in the universe (as I described here.)
The questions set up a battle—atheism versus spirituality—that I don’t think is necessary. I will say, however, that while I come out in favor of spirituality here, that doesn’t mean I buy into the vast wastelands of idiocy that are often touted as spirituality—channeling aliens and all that rot.