Archive for the 'Philosophy' Category
March 16th, 2016 by Wil
Not long ago, in an article entitled “What is Morality,” I offered up the argument (not original to me) that moral behavior is built into our brains via evolution. I noted…
We want to believe that by being moral we are following a set of rules — perhaps divine rules, or perhaps rules dictated by some kind of universal logic. But I am saying morality is neither divine nor logical; moral rules are simply the rules of socialization that have evolved through the history of our species. Our brain applies these rules, much the same way it applies rules for emotions. When we are contemplating or performing an immoral action, we are prodded with a sting of discomfort, similar to the sting of fear. When we are contemplating or performing a moral action, we get a “good feeling,” similar to joy or pride.
The idea being that we literally sense which behaviors feel good and which feel bad. At the time I thought this was a fascinating development in moral psychology. But, while reading the book “Soul Machine,” a history of the development of the concept of the mind, I find…
Hutcheson accepted Locke’s argument that sensations created ideas which then furnished the mind, but he also believed with Shaftesbury that an innate moral sense was the primary motivation for humans, and the source of their emotions. Sentiments arose from that moral barometer—joy from acts of charity and remorse from deceit. Through this moral sense, we experienced another’s emotional state deeply and directly. Ethics and social stability rested, not on the Good Book, but on this natural state of shared compassion, what he called “sympathy” between human beings. Like muscles in the body, this shared emotion balanced private desires and yielded both personal and social harmony.
This Hutcheson fellow basically nailed the idea back in the early 1700s. Interestingly, his idea of experiencing others’ emotional states ties into the recent, still somewhat controversial, discovery of mirror neurons.
The general sense I get with this book is that all the great philosophical thoughts were thunk centuries ago. Now people are just arguing around the edges.
February 28th, 2016 by Wil
Years ago I was looking at the library of my dad’s wife and I noticed a book on rhetoric. I found myself asking, what, exactly, is rhetoric? I associated it with talking and writing but couldn’t say much beyond that.
Anyway, here’s a dictionary definition:
the art of effective or persuasive speaking or writing, especially the use of figures of speech and other compositional techniques.
Once I figured out what rhetoric is, I realized it’s something I do all the time. In my acid logic writings and at this blog I’m often writing opinions that I have some vague interest in convincing other people of.
But lately, I find myself wondering if it’s all bullshit, whether rhetoric is really a way of glossing over the fundamental lack of meaning to most things.
For example, I’m finishing up a piece for the next acid logic where I argue that the soundtracks of 1980s horror and sci-fi movies represented a certain dichotomy: they both embraced technology, by using computer-based tools, and feared it, since the sounds you get from synthesizers always have a certain coldness to them. I argue, with a few rhetorical flourishes, that this dichotomy was part of the spirit of the times.
But is such a statement really true in any meaningful way? How would it be true? I guess if people of the era really sat around and took notice of this idea and used it to form other ideas it might be true, sort of. But something about these rhetorical arguments seems lacking. It feels like you could make any point about anything with the right rhetorical tools.
It seems like a lot of observations about the past, especially past culture, are made after the moment. They become true because the observation is made. But are they really true? Did they really describe thoughts and behaviors people were consciously or unconsciously thinking at the time? And who really cares?
January 14th, 2016 by Wil
I’ve started reading a book that’s been recommended to me in the past – The 4-Hour Workweek. It’s essentially a self-help book, one that promises to provide strategies the reader can use to generate free time. It has a bit of a P.T. Barnum flavor but makes a fair amount of sense and verbalizes a lot of my thoughts on the empty busyness of modern life, especially in the workplace.
I do find myself wondering why we (as a society and species) are so prone to being busy. Why do we feel the need to accomplish anything at all? (I’m not sure this is universal; I have heard of various primitive societies that don’t feel the urge to do more than what is needed.)
Evolutionary psychology would probably argue something like the following: we realize that our status is tied to our odds of reproduction, and thus of passing on our genes, so we seek to elevate our status by earning more and gaining credentials. And we live in an era of incredible opportunities for status improvement. We can work hard at the office and generate our income, but in our off hours we can also become more skilled by learning another language, or playing in a band, or taking globe-trotting vacations that can impress our fellows. I’m not devoid of this kind of obsessive working—currently I have a part-time job, several musical projects, a web site, a passing hobby in drawing and an attempt to learn French going on. It does, at times, seem overwhelming and I find myself wondering, why am I doing this? The conventional wisdom is something like, “To be a better person,” but what the fuck does that really mean? Why do I care about being a better person?
So I suspect there is something beneath the surface that pushes me, something wired into the psyche from years of man’s evolution.
January 2nd, 2016 by Wil
A while back I was pontificating on John Searle’s thought experiment, the Chinese Room. The nature of this thought experiment is detailed here, but since I hate it when people force me to follow links around I will quote the following description.
He proposes that you have a man locked in a Chinese prison cell. The man does not speak or read Chinese. Chinese characters are passed into his cell, and he draws from his own collection of Chinese characters to “answer.” He eventually gets pretty good at responding with the correct Chinese characters. (Theoretically this would take many lifetimes to learn but this is a thought experiment.) The guy is presumably thinking along the lines of, “whenever I get this character or character set, they seem to like it when I reply with this character or character set.” To the Chinese people on the outside, it seems like the guy in the prison cell understands the conversation but in reality he doesn’t. The prisoner recognizes the designs of the symbols, but not their meaning.
Basically Searle argues that real communication is the following process: An intelligent actor passes a communication to a second intelligent actor. That second actor comprehends the message and then develops an intelligent response which it passes to the first party.
That’s real communication (according to Searle). Fake communication is what he argues computers or Chinese prisoners do. It’s something like: an intelligent actor passes a communication to a second actor, who may or may not be intelligent. That second actor passes back, basically at random, something it doesn’t really understand, and the first actor has to make sense of it.
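The prisoner’s symbol-shuffling can be caricatured in a few lines of code. This is a toy sketch of my own, not anything Searle proposed, and the rulebook pairings are invented for illustration:

```python
# A toy "Chinese prisoner": it pairs incoming symbol strings with replies
# it has learned draw approving reactions, with zero grasp of what either
# side of the pairing actually means.
RULEBOOK = {
    "你好吗": "我很好",
    "你叫什么名字": "我叫小明",
}

def prisoner_reply(symbols: str) -> str:
    """Return the memorized reply, or a stock 'please repeat' when stumped."""
    return RULEBOOK.get(symbols, "请再说一遍")

print(prisoner_reply("你好吗"))  # looks fluent from outside the cell
```

From outside, the replies look like understanding; inside, it’s just a lookup table.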
The conceit here is that we humans—an intelligent lot—do a lot of real communication whereas computers, Chinese prisoners and other various idiots don’t.
But is it true we humans really communicate? I’ve always been a little bothered by this conceit because it seems like a lot of communication I engage in is sort of knee-jerk. I don’t really sit and ruminate on what a person is saying to me; my answer appears out of nowhere and is out of my lips before I consider it. I’ll give an example:
Someone: How’re you doing today, Wil?
Me: Fine, and you?
I say this no matter how I’m feeling, unless it’s obvious that I’m not doing well. (If my entrails are hanging out of my stomach, for example.) I just have a canned response when someone asks me that question. Pretty much as we understand computers do.
How about this dialogue?
Someone: Are you going to work today?
Me: I am.
This might require a bit more comprehension on my part. I need to think about whether I’m working and give a response.
But “going to work” can really mean several things. One is “are you traveling to work?” Another is “are you going to perform the act of work?” Do I always understand what the person is asking when I answer that question? Not really. Sometimes context fills it in; if I’m on a bus I might presume that they mean “are you traveling to work” (though I could be wrong).
So my point is here’s an example of communication where I don’t fully understand the question. I only have partial comprehension. And that’s not full communication.
Maybe it’s too much to ask that we have fully understood communication with every bit of dialogue. But I’m wondering if the fact that we don’t points to another possibility: that we really communicate in a more automatic way than we think, much as we presume computers do. Maybe we’re all Chinese prisoners.
September 28th, 2015 by Wil
There’s one idea of late that’s really had a profound effect on my thinking. Unfortunately it’s hard to put into words. (In fact, as you shall see, that is the idea.) Basically it’s the notion that our mental concepts of things are not really things the way real physical things are.
For example, I’m looking at a chair right now. The chair exists in the sense that it is made up of physical matter that exists (barring exotic theories, like the universe being a hologram). But the chair doesn’t really exist as a chair. The idea that this collection of matter exists primarily as a tool for humans (and cats) to sit on* is an unreal idea; it’s a concept of the human mind applied to this collection of matter. If all life on the planet ended, the matter we call this chair might continue but its meaning, its concept, would not.
* Chairs are also good for swinging about in a drunken rage.
But what if we apply this idea (that things and their semantic descriptions are different) to people? Let’s take Elvis. People talk about Elvis the performer and might say he did such and such on some particular date. But people also talk about a more ethereal Elvis—more of a concept of Elvis. The conceptual Elvis is an entity linking various disparate concepts like the South, Hollywood, Rock music, Sexual playfulness (the hips shaking and all that), icon worship and on and on. Some people have a more negative view and link Elvis to White appropriation of Black music and maybe some kind of sexism. But this conceptual Elvis is really quite different from Elvis the guy. Elvis the guy was essentially a collection of matter (e.g. the molecules that made up his body) and perhaps also a consciousness though we still have trouble really defining what that is.
But my real point here is that, like Elvis, we all have conceptual and real versions of ourselves. Other people interact with us and build their conceptualization of us off of those interactions, but also off what other people say about us (true or not) and the various stereotypes (true or not) they apply to us, whether we remind them of their dad, and a whole host of other factors. I’m reminded of the ending of the Michael Douglas film “Falling Down” where Douglas’s character, after shooting up parts of L.A. (in his mind, righteously) finds himself saying, “I’m the bad guy?” He realizes that his concept of himself and everyone else’s concept of himself don’t match up.
To make things more complex, we seem to build our conceptualization of us off these external factors. We think “Everyone says I’m a liar therefore I am.” Or, “I belong to (some particular stereotype) therefore I must act in this or that manner.” Or, “My dad was a violent drunk therefore I must be.” It turns into some infinite feedback loop—you think you’re X and thus behave like X and everyone sees you as X and you become more set in the pattern of X etc.
But in the end, you’re really just some molecules.
September 17th, 2015 by Wil
I’ve gotten a sense over the years of the futility of most debates about politics and related topics—history, philosophy, ethics etc. I can think of very few discussions where I changed someone’s mind or had mine changed. People seem very fixed in their opinions and unwilling to move in the face of evidence.
This may not be entirely unreasonable. I think we all have a certain sense that how evidence is presented can distort reality. For example, someone could say, “A 1998 study showed that people who ate mouse droppings lost weight,” while declining to mention all the studies that did not support this argument or the fact that the particular study that did was rife with methodology errors. We’re smart not to take things at face value.
But sometimes the evidence is pretty solid and people seem unwilling to change. I find myself guilty of this; I read something contrary to my beliefs and I almost feel physically resistant. We want our truth to be the truth. Which is really a matter of ego, I suppose.
I find myself particularly bothered by conspiracy theories. Donald Trump just recently repeated the idea that vaccines cause autism. This idea has been as disproved as possible but refuses to die. Because, I guess, people just want to believe it.
I’ve been reading an interesting book by Michael Shermer called “The Believing Brain” where he examines why we are so prone to believe things that fly in the face of evidence. It’s stuff you’ve probably heard before: we want control over uncertainty, and conspiracy theories give us knowledge, which is a stepping stone to control. Why’d your kid get autism? The correct answer is: who knows? The psychologically comforting answer is that he was poisoned by vaccines.
If there’s been an overall trend in my thought for the past 8 or so years it’s been that things are pretty uncertain and we basically need to embrace that. As I’ve recounted a million times, I had pretty solid faith in the medical establishment until I came down with a dizziness they could not explain. I had hand pain that lasted for years and was impervious to any number of the “fixes” medicine offered. To solve these problems you basically have to stumble around in the dark until you find something. Few experts saw the economic bust of 2008 coming. It seems like nobody predicted the rise of ISIS in the Middle East. Did anyone six months ago seriously think Donald Trump would be the leading Republican candidate? The experts on these matters seem to be largely a group of know-nothings*. But if they know nothing, then we know nothing and that’s not solid ground to stand on.
But maybe that’s where we are. And maybe accepting that is the best course of action. Embrace the mystery of life and all that.
*I’m reminded of the study showing that political pundits are mostly spectacularly wrong in their predictions.
July 27th, 2015 by Wil
There’s a blog called “Wait But Why” that does a good job of taking dense, philosophical topics and presenting them in an amusing but thought-provoking way. The author recently did a very long post on the kind of artificial intelligence technology that many think is around the corner (possibly only a few decades away). He argues that AI could radically reshape the course of history, possibly by bringing about a Utopia or possibly by causing humanity’s destruction.
In part II of the post the author gets into the more negative scenarios. A point he makes well is that AI wouldn’t end humanity because the AI is “evil” or out to get us, but rather that humanity’s extinction might just be a logical output of badly coded commands. A simplistic example would be someone tells an AI to solve world hunger and the computer “thinks”, “Well, only living humans are hungry so if I kill them all, they can’t be hungry.” Again, a very simplistic example, but a much more convoluted one could occur and wipe us out.
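The perverse logic of that simplistic example can be sketched in code. Everything here is an invented toy, not any real AI system: an optimizer told only to minimize the count of hungry people, with no other human values encoded in the objective.

```python
# Hypothetical toy: a machine scores candidate "plans" purely on how many
# hungry people remain afterward. Nothing else about humans is represented.
def hungry_count(population):
    """population[i] is True if person i is hungry."""
    return sum(population)

candidates = {
    # A food program that feeds most people but leaves one still hungry.
    "food program": [False, False, False, True],
    # No people at all: the literal objective scores this a perfect zero.
    "eliminate humans": [],
}

# The machine picks whichever plan scores lowest on the stated objective.
best = min(candidates, key=lambda name: hungry_count(candidates[name]))
print(best)  # "eliminate humans" -- the objective sees only the hunger count
```

The bug isn’t malice; it’s that the objective function says nothing about keeping people alive.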
So the challenge is that we have to program our morality into the AI. It needs to follow our moral rules. But this is interesting because, as I’ve pointed out before, we really don’t have a clear and concise ruleset. We think we do, the Golden Rule being a good example, but the Golden Rule is really quite flawed. It’s more true to say we have a general set of moral intuitions that we kind of follow sometimes. Hardly the sort of thing that can be fed to a computer program.
So maybe the great challenge for humanity right now is to create a purely logical, objective set of moral rules. I personally suspect that is impossible.
June 30th, 2015 by Wil
For a while now I’ve heard of a particular drug that purports to dull the formation of painful memories. I’ve always been a little unclear on how it works but I believe it takes away the emotional sting of the memory while leaving the recollection of the events. Ideally it could aid people who have suffered horrible crimes or soldiers suffering from PTSD. I had not heard of a more controversial use: the pill as a way of ducking emotional damage caused by committing heinous acts, especially in war time. This article, from 2003, describes a scenario.
The artillery this soldier can unleash with a single command to his mobile computer will bring flames and screaming, deafening blasts and unforgettably acrid air. The ground around him will be littered with the broken bodies of women and children, and he’ll have to walk right through. Every value he learned as a boy tells him to back down, to return to base and find another way of routing the enemy. Or, he reasons, he could complete the task and rush back to start popping pills that can, over the course of two weeks, immunize him against a lifetime of crushing remorse. He draws one last clean breath and fires.
That sounds a little overdramatic but makes the point. The rest of the article is a very even-handed look at the whole issue. Some might say we can never use the pill in this way as it will destroy our humanity. But the response is that, look, if a killer is wounded during his crime, he still gets medical treatment for his physical wounds. Why would we deny him treatment for his psychological wounds? And if the person is a soldier, why should he be doomed to a lifetime of guilt while the politicians who put him in the position get off scot-free*? It’s quite an interesting ethical debate.
* Writing this sentence made me consider how the term “scot-free” came to be. You’d think it was based on some story about a guy named Scot, but not so. It’s derived from an Old English term that means exempt from royal tax.
June 19th, 2015 by Wil
In the field of ethics, you often hear discussion of “The Trolley Problem,” a fictional scenario where a person is forced to choose the best outcome from a situation in which at least one person is guaranteed to die. I stumbled across this web article which makes the case that the self-driving Google car could bring the trolley problem to reality. If your car is headed into a crash, should it sacrifice you to save bystanders?
How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
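The “least harm” calculus the article imagines can be caricatured in a few lines. This is a hypothetical sketch, not any real car’s software; the options, occupant counts, and survival odds are all invented numbers:

```python
# Toy "least expected harm" policy for a driverless car facing a no-win
# crash. The real problem isn't computing the minimum -- it's who gets to
# choose this objective function in the first place.
def expected_deaths(option):
    return option["people"] * (1 - option["survival_odds"])

options = [
    {"name": "swerve into oncoming traffic", "people": 5, "survival_odds": 0.5},
    {"name": "hit the retaining wall",       "people": 1, "survival_odds": 0.4},
]

least_harm = min(options, key=expected_deaths)
print(least_harm["name"])  # the wall: 0.6 expected deaths vs. 2.5 --
                           # even though it sacrifices the car's own occupant
```

Notice that under this objective the car owner loses by design, which is exactly the programming dilemma the article raises.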
I would offer an additional moral question. If my car decides to sacrifice me can it be programmed to quickly and painlessly kill me as opposed to leaving me to the destruction of a car accident?
May 22nd, 2015 by Wil
I’ve been noticing lately how often people’s statements, especially moral statements, really seem to be more about themselves than the topic at hand. “Well I find racism repugnant!” … that sort of thing. The point seldom is to convince others of a viewpoint, but to stake out one’s moral high ground.
I’ve been reading a short book called “Crimes Against Logic,” written by a London-based philosopher, and towards the end he captures this problem nicely.
The idea that sincerity may substitute for reason is founded on an egocentric attitude toward belief: that what I believe is all about me, not about reality. What matters is not that the position I favor will have the best or intended effects, or that the problems I worry about are real or grave, but only that I hold my position from the right sentiments, that I am good.
So how does one avoid this? The trick is to take yourself out of the statement. Say, “racism is bad,” and then explain (the objective) reasons why. But it’s trickier ground requiring heavier thought.