Archive for the 'Philosophy' Category

Free time

I’ve started reading a book that’s been recommended to me in the past – The 4-Hour Workweek. It’s essentially a self-help book, one that promises to provide strategies the reader can use to generate free time. It has a bit of a P.T. Barnum flavor but makes a fair amount of sense and verbalizes a lot of my thoughts on the empty busyness of modern life, especially in the workplace.

I do find myself wondering why we (as a society and species) are so prone to being busy. Why do we feel the need to accomplish anything at all? (I’m not sure this is universal; I have heard of various primitive societies that don’t feel the urge to do more than what is needed.)

Evolutionary psychology would probably argue something like the following: we realize that our status is tied to our odds of reproducing and thus passing on our genes, so we seek to elevate our status by earning more and gaining credentials. And we live in an era of incredible opportunities for status improvement. We can work hard at the office and generate our income, but in our off hours we can also become more skilled by learning another language, or playing in a band, or taking globe-trotting vacations that can impress our fellows. I’m not devoid of this kind of obsessive working; currently I have a part-time job, several musical projects, a web site, a passing hobby of drawing and an attempt to learn French going on. It does, at times, seem overwhelming, and I find myself wondering why I’m doing this. The conventional wisdom is something like, “To be a better person,” but what the fuck does that really mean? Why do I care about being a better person?

So I suspect there is something beneath the surface that pushes me, something wired into the psyche from years of man’s evolution.

Automatic communication

A while back I was pontificating on John Searle’s thought experiment, the Chinese Room (which I’ve always thought of as a kind of Chinese prisoner’s dilemma). The nature of this thought experiment is detailed here, but since I hate it when people force me to follow links around I will quote the following description.

He proposes that you have a man locked in a Chinese prison cell. The man does not speak or read Chinese. Chinese characters are passed into his cell, and he draws from his own collection of Chinese characters to “answer.” He eventually gets pretty good at responding with the correct Chinese characters. (Theoretically this would take many lifetimes to learn but this is a thought experiment.) The guy is presumably thinking along the lines of, “whenever I get this character or character set, they seem to like it when I reply with this character or character set.” To the Chinese people on the outside, it seems like the guy in the prison cell understands the conversation but in reality he doesn’t. The prisoner recognizes the designs of the symbols, but not their meaning.

Basically Searle argues that real communication is the following process: An intelligent actor passes a communication to a second intelligent actor. That second actor comprehends the message and then develops an intelligent response which it passes to the first party.

That’s real communication (according to Searle). Fake communication is what he argues computers or Chinese prisoners do. It’s something like: an intelligent actor passes a communication to a second actor, who may or may not be intelligent. That second actor passes back, by rote rule-following, something it doesn’t really understand, and the first actor has to make sense of it.

The conceit here is that we humans (an intelligent lot) do a lot of real communication whereas computers, Chinese prisoners and various other idiots don’t.

But is it true we humans really communicate? I’ve always been a little bothered by this conceit because it seems like a lot of communication I engage in is sort of knee-jerk. I don’t really sit and ruminate on what a person is saying to me; my answer appears out of nowhere and is out of my lips before I consider it. I’ll give an example:

Someone: How’re you doing today, Wil?

Me: Fine, and you?

I say this no matter how I’m feeling, unless it’s obvious that I’m not doing well. (If my entrails are hanging out of my stomach, for example.) I just have a canned response when someone asks me that question. Pretty much as we understand computers do.
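That canned-response habit is easy to mimic in code. Here’s a minimal sketch (the replies and function names are my own, purely illustrative) of a “conversationalist” that, like Searle’s prisoner, matches incoming symbols to stock replies without understanding any of them:

```python
# A toy "Chinese prisoner": it maps recognized inputs to stock replies
# without representing what any of them actually mean.
CANNED_REPLIES = {
    "how're you doing today, wil?": "Fine, and you?",
    "are you going to work today?": "I am.",
}

def respond(utterance):
    """Return a stock reply for a recognized utterance, or a vague hedge."""
    key = utterance.strip().lower()
    # No comprehension happens here -- just symbol matching, the way the
    # prisoner recognizes the shapes of characters but not their meaning.
    return CANNED_REPLIES.get(key, "Hmm, interesting.")
```

From the outside, `respond("How're you doing today, Wil?")` looks like understanding; inside, it’s a lookup table.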

How about this dialogue?

Someone: Are you going to work today?

Me: I am.

This might require a bit more comprehension on my part. I need to think about whether I’m working and give a response.

But “going to work” can really mean several things. One is “are you traveling to work?” Another is “are you going to perform the act of work?” Do I always understand what the person is asking when I answer that question? Not really. Sometimes context fills it in; if I’m on a bus I might presume that they mean “are you traveling to work” (though I could be wrong).

So here’s an example of communication where I don’t fully understand the question; I only have partial comprehension. And that’s not full communication.

Maybe it’s too much to ask that we have fully understood communication with every bit of dialogue. But I’m wondering if the fact that we don’t points to another possibility: that we really communicate in a more automatic way than we think, much as we presume computers do. Maybe we’re all Chinese prisoners.

You aren’t real. Neither was Elvis.

There’s one idea of late that’s really had a profound effect on my thinking. Unfortunately it’s hard to put into words. (In fact, as you shall see, that is the idea.) Basically it’s the notion that our mental concepts of things are not really things the way real physical things are.

For example, I’m looking at a chair right now. The chair exists in the sense that it is made up of physical matter that exists (barring exotic theories like the idea that the universe is a hologram). But the chair doesn’t really exist as a chair. The idea that this collection of matter exists primarily as a tool for humans (and cats) to sit on* is an unreal idea; it’s a concept of the human mind applied to this collection of matter. If all life on the planet ended, the matter we call this chair might continue, but its meaning, its concept, would not.

* Chairs are also good for swinging about in a drunken rage.

But what if we apply this idea (that things and their semantic descriptions are different) to people? Let’s take Elvis. People talk about Elvis the performer and might say he did such and such on some particular date. But people also talk about a more ethereal Elvis—more of a concept of Elvis. The conceptual Elvis is an entity linking various disparate concepts like the South, Hollywood, rock music, sexual playfulness (the hip shaking and all that), icon worship and on and on. Some people have a more negative view and link Elvis to white appropriation of Black music and maybe some kind of sexism. But this conceptual Elvis is really quite different from Elvis the guy. Elvis the guy was essentially a collection of matter (e.g. the molecules that made up his body) and perhaps also a consciousness, though we still have trouble really defining what that is.

But my real point here is that, like Elvis, we all have conceptual and real versions of ourselves. Other people interact with us and build their conceptualization of us off of those interactions, but also off what other people say about us (true or not), the various stereotypes (true or not) they apply to us, whether we remind them of their dad, and a whole host of other factors. I’m reminded of the ending of the Michael Douglas film “Falling Down” where Douglas’s character, after shooting up parts of L.A. (in his mind, righteously), finds himself saying, “I’m the bad guy?” He realizes that his concept of himself and everyone else’s concept of him don’t match up.

To make things more complex, we seem to build our conceptualization of ourselves off these external factors. We think, “Everyone says I’m a liar, therefore I am.” Or, “I belong to (some particular stereotype), therefore I must act in this or that manner.” Or, “My dad was a violent drunk, therefore I must be.” It turns into an infinite feedback loop: you think you’re X and thus behave like X, and everyone sees you as X, and you become more set in the pattern of X, etc.

But in the end, you’re really just some molecules.

Nothing is certain

I’ve gotten a sense over the years of the futility of most debates about politics and related topics—history, philosophy, ethics etc. I can think of very few discussions where I changed someone’s mind or had mine changed. People seem very fixed in their opinions and unwilling to move in the face of evidence.

This may not be entirely unreasonable. I think we all have a certain sense that how evidence is presented can distort reality. For example, someone could say, “A 1998 study showed that people who ate mouse droppings lost weight,” while declining to mention all the studies that did not support this argument, or the fact that the particular study that did was rife with methodological errors. We’re smart not to take things at face value.

But sometimes the evidence is pretty solid and people seem unwilling to change. I find myself guilty of this; I read something contrary to my beliefs and I almost feel physically resistant. We want our truth to be the truth. Which is really a matter of ego, I suppose.

I find myself particularly bothered by conspiracy theories. Donald Trump just recently repeated the idea that vaccines cause autism. This idea has been as disproved as possible but refuses to die. Because, I guess, people just want to believe it.

I’ve been reading an interesting book by Michael Shermer called “The Believing Brain” where he examines why we are so prone to believe things that fly in the face of evidence. It’s stuff you’ve probably heard before: we want control over uncertainty, and conspiracy theories give us knowledge, which is a stepping stone to control. Why’d your kid get autism? The correct answer is: who knows? The psychologically comforting answer is that he was poisoned by vaccines.

If there’s been an overall trend in my thought for the past 8 or so years, it’s been that things are pretty uncertain and we basically need to embrace that. As I’ve recounted a million times, I had pretty solid faith in the medical establishment until I came down with a dizziness they could not explain. I had hand pain that lasted for years and was impervious to any number of the “fixes” medicine offered. To solve these problems you basically have to stumble around in the dark until you find something. Few experts saw the economic bust of 2008 coming. It seems like nobody predicted the rise of ISIS in the Middle East. Did anyone six months ago seriously think Donald Trump would be the leading Republican candidate? The experts on these matters seem to be largely a group of know-nothings*. But if they know nothing, then we know nothing, and that’s not solid ground to stand on.

But maybe that’s where we are. And maybe accepting that is the best course of action. Embrace the mystery of life and all that.

*I’m reminded of the study showing that political pundits are mostly spectacularly wrong in their predictions.

Can AI be programmed to be moral?

There’s a blog called “Wait But Why” that does a good job of taking dense, philosophical topics and presenting them in an amusing but thought-provoking way. The author recently did a very long post on the kind of artificial intelligence technology that many think is around the corner (possibly only a few decades away). He concedes that AI could radically reshape the course of history, possibly by bringing about a Utopia or possibly by causing humanity’s destruction.

In part II of the post the author gets into the more negative scenarios. A point he makes well is that AI wouldn’t end humanity because the AI is “evil” or out to get us, but rather that humanity’s extinction might just be the logical output of badly coded commands. A simplistic example: someone tells an AI to solve world hunger and the computer “thinks,” “Well, only living humans are hungry, so if I kill them all, they can’t be hungry.” Again, a very simplistic example, but a much more convoluted one could occur and wipe us out.
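The mis-specified-objective problem fits in a few lines of code. This is a toy sketch of my own (the objective, the actions and their names are all invented for illustration): an optimizer told only to minimize the number of hungry humans, with no other constraints, happily picks the catastrophic option.

```python
# Toy illustration of a badly specified objective. The optimizer is told
# to minimize hungry humans -- and nothing else.
def hungry_count(population):
    return sum(1 for person in population if person["hungry"])

def best_action(population, actions):
    # Pick whichever action leaves the fewest hungry humans.
    return min(actions, key=lambda act: hungry_count(act(population)))

def feed_most(population):
    # Suppose supplies run short: everyone but one person gets fed.
    fed = [{**p, "hungry": False} for p in population[:-1]]
    return fed + [population[-1]]

def remove_everyone(population):
    # Zero humans means zero hungry humans -- objective "satisfied."
    return []

people = [{"hungry": True}, {"hungry": True}]
# best_action(people, [feed_most, remove_everyone]) picks remove_everyone,
# because nothing in the objective says that outcome is catastrophic.
```

Nothing here is “evil”; the objective simply never mentions that the humans are supposed to survive.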

So the challenge is that we have to program our morality into the AI. It needs to follow our moral rules. But this is interesting because, as I’ve pointed out before, we really don’t have a clear and concise ruleset. We think we do, the Golden Rule being a good example, but the Golden Rule is really quite flawed. It’s more true to say we have a general set of moral intuitions that we kind of follow sometimes. Hardly the sort of thing that can be fed to a computer program.

So maybe the great challenge for humanity right now is to create a purely logical, objective set of moral rules. I personally suspect that is impossible.

The anti-guilt pill

For a while now I’ve heard of a particular drug that purports to dull the formation of painful memories. I’ve always been a little unclear on how it works, but I believe it takes away the emotional sting of the memory while leaving the recollection of the events. Ideally it could aid people who have suffered horrible crimes or soldiers suffering from PTSD. I had not heard of a more controversial use: the pill as a way of ducking emotional damage caused by committing heinous acts, especially in wartime. This article, from 2003, describes a scenario.

The artillery this soldier can unleash with a single command to his mobile computer will bring flames and screaming, deafening blasts and unforgettably acrid air. The ground around him will be littered with the broken bodies of women and children, and he’ll have to walk right through. Every value he learned as a boy tells him to back down, to return to base and find another way of routing the enemy. Or, he reasons, he could complete the task and rush back to start popping pills that can, over the course of two weeks, immunize him against a lifetime of crushing remorse. He draws one last clean breath and fires.

That sounds a little overdramatic but makes the point. The rest of the article is a very even-handed look at the whole issue. Some might say we can never use the pill in this way as it will destroy our humanity. But the response is that, look, if a killer is wounded during his crime, he still gets medical treatment for his physical wounds. Why would we deny him treatment for his psychological wounds? And if the person is a soldier, why should he be doomed to a lifetime of guilt while the politicians who put him in the position get off scot-free*? It’s quite an interesting ethical debate.

* Writing this sentence made me consider how the term “scot-free” came to be. You’d think it was based on some story about a guy named Scot, but not so. It’s derived from an Old English term that means exempt from royal tax.

How does your self-driving car handle the Trolley Problem?

In the field of ethics, you often hear discussion of “The Trolley Problem,” a fictional scenario where a person is forced to choose the best outcome from a situation where at least one person is guaranteed to die. I stumbled across this web article, which makes the case that the self-driving Google car could bring the trolley problem to reality. If your car is headed into a crash, should it sacrifice you to save bystanders?

How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?

I would offer an additional moral question. If my car decides to sacrifice me can it be programmed to quickly and painlessly kill me as opposed to leaving me to the destruction of a car accident?
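The “who will make that call” question becomes vivid if you imagine the policy as a single function somebody has to write in advance. A hedged sketch (the function, the options and every number here are invented for illustration): the moral decision is smuggled into one weighting parameter.

```python
# Toy crash-decision policy. Each option carries estimated harm to the
# car's occupants and to bystanders; the weighting IS the moral choice
# that some engineer has to program in before the crash ever happens.
def choose_maneuver(options, owner_weight=1.0):
    """Pick the option with the lowest weighted total harm.

    owner_weight > 1.0 favors the occupants; owner_weight < 1.0
    sacrifices them more readily to protect bystanders.
    """
    def weighted_harm(option):
        return owner_weight * option["occupant_harm"] + option["bystander_harm"]
    return min(options, key=weighted_harm)

options = [
    {"name": "swerve into oncoming traffic", "occupant_harm": 3, "bystander_harm": 8},
    {"name": "hit the retaining wall", "occupant_harm": 9, "bystander_harm": 0},
]
# With owner_weight=1.0 the car hits the wall (9 vs. 11 total harm);
# raise the weight enough and it swerves into traffic instead.
```

Every answer to the ethical question ends up encoded as a default value somewhere, which is exactly what makes the question so uncomfortable.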

No one cares what YOU think

I’ve been noticing lately how often people’s statements, especially moral statements, really seem to be more about themselves than the topic at hand. “Well I find racism repugnant!” … that sort of thing. The point seldom is to convince others of a viewpoint, but to stake out one’s moral high ground.

I’ve been reading a short book called “Crimes Against Logic” written by a London-based philosopher, and towards the end he captures this problem nicely.

The idea that sincerity may substitute for reason is founded on an egocentric attitude toward belief: that what I believe is all about me, not about reality. What matters is not that the position I favor will have the best or intended effects, or that the problems I worry about are real or grave, but only that I hold my position from the right sentiments, that I am good.

So how does one avoid this? The trick is to take yourself out of the statement. Say, “racism is bad,” and then explain the (objective) reasons why. But that’s trickier ground, requiring heavier thought.

Damasio, Jaynes and Sarno

In past writings I’ve mentioned my excitement when I first read Antonio Damasio’s neuroscience tome “Descartes’ Error.” In that book Damasio laid out his observations that emotions are really physical sensations, particularly sensations of our internal body: guts, lungs, circulation etc. If you take away the physical sensation of an emotion, you take away that emotion’s “sting.” (One way to mitigate a negative emotional state is, of course, through booze and drugs, which bring about a pleasant body high. Not that I advocate such activities.)

I’ve also mentioned that I’ve recently been reading Julian Jaynes’ “The Origin of Consciousness.” In the chapter I just finished he examines the famous Greek stories The Iliad and The Odyssey. He argues that several of the Greek words frequently used in these stories have been mistranslated. Words such as thumos and phrenes have been translated to mean soul and heart (in the figurative sense) respectively, but he argues they refer more correctly to particular sensations of the body, exactly the sort of sensations Damasio wrote about. (Jaynes believes thumos, for example, really refers to the sensations present in the activation of the body’s stress response: increased blood pressure, increased energy etc. Basically, being “amped up.”)

Essentially, Jaynes argues that in the Greek era people were much more conscious* of their body state. When modern people say, “I feel angry,” they are only tangentially aware of their erratic heartbeat and hot face, whereas ancient people, Jaynes argues, were acutely aware of their physiological state. He also alleges that people didn’t always feel “ownership” of these emotional states, e.g. they were aware of the sensations but did not ascribe the sensations to a particular self (the way we do). But that’s a more complex discussion.

* Well, this isn’t entirely true as Jaynes famously argues in the book that for some parts of history men weren’t conscious at all! I use the word “conscious” as a synonym for “aware” here.

I’ve also talked much in the past of Dr. John Sarno’s notion that much recurring pain, gastrointestinal issues and other maladies are actually caused by a distraught subconscious. Jaynes hints at the very same idea with no knowledge (to my knowledge) of Sarno’s work.

I think it is obvious to the medical reader that these matters we are discussing under the topic of the preconscious hypostases have a considerable bearing on any theory of psychosomatic disease. In the thumos, phrenes, kradie and etor we have covered the four major target systems of such illnesses. And that they compose the very groundwork of consciousness, a primitive partial type of consciousizing, has important consequences in medical theory.

Rethinking The Fountainhead

A while back there was an interesting blog post on Andrew Sullivan’s site (written by a guest writer) tying Ayn Rand’s book The Fountainhead to the issues surrounding the hacking of the Sony Corporation. Rand’s writing is, of course, often lauded by libertarian free market types. This post had a different take… (Warning: Major Spoiler Alert about The Fountainhead.)

The problem of willingly selling out to the Chinese reminded me of Ayn Rand, whose bracing moral lessons I’m sure Freddie had in the back of his mind. Rand’s finest novel, The Fountainhead, is an anti-capitalist screed about the spiritual and cultural evil of catering to market demand. Forget the problem of giving the commie censors what they want. It’s wrong to give the free market what it wants, when what it wants is aesthetically debased, which it always is. The architect hero of The Fountainhead, Howard Roark, is the ultimate in spine, the patron saint of never selling out. When one of his perfect, austere modernist buildings is bowdlerized the better to suit the public taste, he blows it up. That’s right, Howard Roark is a terrorist, a jihadi for artistic integrity.

This is the first time in writing I’ve ever seen someone wrestle with what I always found confounding about the novel. When I read the book, I was struck by how anti-libertarian Roark’s actions seem; he shows no respect for property rights when he blows up the building. I assumed it was a kind of glitch in the philosophy of the book, but it could be that it is the philosophy of the book. It does, at least, present the trait I’ve always liked about Rand: love her or hate her, she clearly did not give a shit what anyone else thought, so much so that she presents a character who is essentially a terrorist as a hero. (I believe I’m correct that no one is actually killed when the building is destroyed, as he does it late at night.)