September 26th, 2014 by Wil
In a recent New York Review of Books article entitled “What Your Computer Can’t Know” philosopher John Searle provides a fairly helpful analysis of the different kinds of knowledge. He states:
The distinction between objectivity and subjectivity looms very large in our intellectual culture but there is a systematic ambiguity in these notions that has existed for centuries and has done enormous harm. There is an ambiguous distinction between an epistemic sense (“epistemic” means having to do with knowledge) and an ontological sense (“ontological” means having to do with existence). In the epistemic sense, the distinction is between types of claims (beliefs, assertions, assumptions, etc.). If I say that Rembrandt lived in Amsterdam, that statement is epistemically objective. You can ascertain its truth as a matter of objective fact. If I say that Rembrandt was the greatest Dutch painter that ever lived, that is evidently a matter of subjective opinion; it is epistemically subjective.
Underlying this epistemological distinction between types of claims is an ontological distinction between modes of existence. Some entities have an existence that does not depend on being experienced (mountains, molecules and tectonic plates are good examples.) Some entities exist only so far as they are experienced (pains, tickles and itches are examples.) This distinction is between the ontologically objective and the ontologically subjective. No matter how many machines may register an itch, it is not really an itch until somebody consciously feels it: it is ontologically subjective.
This seems quite useful and is worth keeping in mind. But I feel there are blurry lines that need to be acknowledged. Let’s look at what a mountain is. We can break that entity into a couple of “parts” – there’s the fact that the mountain is there in some objective sense (some people who question the very nature of reality might dispute this point) and then there is my observation of the mountain, my act of seeing* the mountain. The first part is objective, the second subjective. But let’s now look at an itch. An itch is similar to pain and is caused by some minor degradation of your physical body. Maybe a bug bit you, maybe a wound is healing. The actual sensation of the itch is your sensory awareness of this degradation. So again, there are two components—the objective part (the biting bug or whatever it is) and the subjective (the itchy feeling.) My point is that a mountain and an itch are not all that different; they share these two components. An itch is really just a way of sensing the thing that attacked your skin.
* Sight is really the only sense that allows one to get a clear representation of a mountain. There are other objects, however, that one can use various senses to appreciate. A lasagna can be seen, smelled and tasted for example.
And going to the first quoted paragraph: We can say that it is objective to say that Rembrandt lived in Amsterdam, but is it really? It is dependent on us agreeing to the human convention that this particular place on earth is called Amsterdam and that this particular bundle of historical matter was called Rembrandt. For the statement to be true we (the observers of the statement) need to agree to various taxonomies. If I could get everyone to agree on some system by which we could judge art, I might very well be able to objectively claim that Rembrandt was the greatest Dutch painter. That really is the difference between the two statements: how many people agree on the terms. (There is near universal agreement on terms like “Rembrandt” and “Amsterdam,” less so on “great painter.”)
It may seem I’m trying to be difficult here, but I’m merely pointing out how hard it is to really define these terms.
September 23rd, 2014 by Wil
I’ve mentioned that I’ve been getting back into my childhood pastime of drawing comic style art. Musing on this prompted the question: why do we draw? By this I mean: what is the subconscious motivation to spend hours penciling and inking away at various pieces of fantastic and mundane imagery?
It’s an impossible question to answer, but on some level I think we take ownership of what we draw. If I draw a fast sports car—as teen boys have done on algebra books for years—I, in some weird way, own that car. If I draw a fantastic spaceship, I again own it. And if I draw a beautiful, buxom woman, I own her as well, even if I am an overweight, pimply dork, as most comic artists are. (To be clear: I am not an overweight, pimply dork. I am quite beautiful.)
I suppose it’s similar to why we write fiction. Most humans have little control over their lives—they can lose their jobs, lovers, friends in an instant. Their economic fortunes are dictated by impossible-to-understand market forces and governmental whims. They are lost in a violent sea. But they can write; they can create their own worlds and people and control them. That provides at least some small sense of autonomy.
September 16th, 2014 by Wil
Lately I’ve been getting back into drawing comic book style art. (I made my own mini comics as a kid.) It’s a lot of fun and I feel my skill is improving. But as I look at how art is produced in this modern age I find myself struck by a few concerns.
At one point after I started drawing again I looked into whether there was some cheap software that could render backgrounds such as room interiors or rows of buildings. Such images are largely made up of basic shapes, I theorized, and shapes are basically a series of coordinates which a computer should have no problem rendering. I never really found an affordable program and ultimately decided I should learn to draw such images myself.
Nonetheless, a lot of computer art and animation is produced via that process: an artist defines a shape, often of great complexity, and the computer renders it. If one wants to change the color or texture of the shape, that’s easy enough. This process is far less laborious than the process of drawing or painting the shape.
And there are pluses to this process. It’s allowed for an explosion of 3d art and allowed people with limited conventional artistic skill to produce wonders. But here’s a point that nags at me: I could sit down and draw 10 red boxes. If I’m a decent enough artist, they’ll look pretty similar. But they will have differences. The pen strokes I use on any one box won’t match the strokes on any others. The boxes won’t match perfectly in size, even if I use a ruler. However, I can create 10 identical boxes in Photoshop or some similar program. It’s as easy as copy and paste. The uniqueness of the hand-drawn boxes is lost.
People make similar complaints in the world of music. I can sit down at a mic’d piano and play 10 instances of a C minor chord. None of them will sound exactly like the others because the pressure I use to play the piano keys will vary, the air in the room will settle in different ways causing the sound vibrations to be affected, and a number of other factors. On the other hand, I could sit down and play 10 chords on a midi synthesizer piano and they might end up sounding very similar. The piano sound on a synthesizer is a sample—essentially a recording of a piano that took place in the past (when whoever built the sound library created it) and can’t really change. This is why people complain about the static quality of synthesized sounds. (In fact, sound engineers are making quite a lot of progress in getting variation into sound samples but it’s still not on par with the real thing.)
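To make the contrast concrete, here is a minimal sketch in Python of the difference between a sequencer’s perfectly repeated chord and a “humanized” one. (The jitter amounts are invented for illustration; randomizing velocity and timing on each repetition is the standard trick engineers use to fight that static quality.)

```python
import random

# Ten "identical" C minor chords, as a sequencer would store them:
# same MIDI notes, same velocity (key pressure), same timing, every time.
C_MINOR = (60, 63, 67)  # C4, Eb4, G4
static_chords = [{"notes": C_MINOR, "velocity": 80, "offset_ms": 0}
                 for _ in range(10)]

def humanize(chord, vel_jitter=6, time_jitter_ms=12):
    """Mimic a human player: vary the key pressure (velocity) and the
    strike time slightly on every repetition. Jitter amounts are made up."""
    return {
        "notes": chord["notes"],
        "velocity": chord["velocity"] + random.randint(-vel_jitter, vel_jitter),
        "offset_ms": chord["offset_ms"] + random.randint(-time_jitter_ms, time_jitter_ms),
    }

human_chords = [humanize(c) for c in static_chords]

# Every static chord is byte-for-byte identical; the humanized ones vary.
print(len({(c["velocity"], c["offset_ms"]) for c in static_chords}))  # 1
```

My ten hand-drawn red boxes differ for the same reason: the hand never repeats itself exactly, while copy and paste does.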
Having said all that, I’m quite glad sound libraries exist. I’ve recorded fairly complex works using synthesized symphonic instruments that I would never have been able to attempt in the analog (non-digital) days. (Because the world’s symphonies are not clamoring to play my work—the fools!)
But there’s something disturbing about the advent of so much art being exact duplicates of other pieces of art. The ease of use offered by computers is great for the art of neophytes (and in the realm of drawing that is what I consider myself) but it cheapens the work of pros. “Oh, that awesome looking hand you spent 10 hours painting? Look, I can generate something just as cool in 20 seconds on the computer!”
Of course we’ve seen this before. It used to be the only way to get a chair was to have a craftsman make one by hand. Now IKEA pumps them out by the millions. Such is progress. (Note how I spit that last word out.)
September 12th, 2014 by Wil
When I read Antonio Damasio’s neuroscience book “Descartes’ Error” several years ago I was struck with a certain revelation. Damasio made the point that the subtle fluctuations of our emotional life are tied to physiological processes. A sting of fear is correlated with the activation of the amygdala and corresponding hormone releases. Similar processes drive other emotions and sensations like serenity, sleepiness and ecstasy. They are fired off by the release of chemicals within our bodies.
It’s fascinating to think about this but you don’t see the concept mentioned much in non-science writing. Thus I was pleasantly surprised at this passage on author Hugh Howey’s blog. While discussing Barnes and Noble’s use of “loyalty cards” (basically a tool to provide discounts to frequent buyers) he states…
Loyalty cards are another issue. These cost a yearly subscription, and being asked if you have one right at the moment of transactional copulation is a buzz-kill. Dreading the pressure of signing up is a great way to block the dopamine release that might get me to come back.
Dopamine is the neurotransmitter related to anticipation.
I have to say I agree with Howey’s larger point about these kinds of membership programs. I’ve pretty much given up on ever shopping at a Von’s Supermarket as every time I do they try to push me into their stupid loyalty card program. I’m curious whether these programs ultimately drive away more customers than they keep. Albertsons, another store around here, has actually discontinued their program.
September 9th, 2014 by Wil
I recently came across discussion of the fact that, in medieval times, animals were often put on trial for various crimes. This web page describes such occurrences in detail and includes this delightfully grisly anecdote.
Such was the case on June 14, 1494, when a pig was arrested for having “strangled and defaced a young child in its cradle, the son of Jehan Lenfant, a cowherd on the fee-farm of Clermont, and of Gillon his wife.” During the trial, several witnesses explained that “on the morning of Easter Day, as the father was guarding the cattle and his wife Gillon was absent in the village of Dizy, the infant being left alone in its cradle, the said pig entered during the said time the said house and disfigured and ate the face and neck of the said child, which, in consequence of the bites and defacements inflicted by the said pig, departed this life.”
After listening to the evidence, the judge read out his verdict: “We, in detestation and horror of the said crime, and to the end that an example may be made and justice maintained, have said, judged, sentenced, pronounced and appointed, that the said porker, now detained as a prisoner and confined in the said abbey, shall be by the master of high works hanged and strangled on a gibbet of wood near and adjoining to the gallows and high place of execution …”
The pig ate a face! I feel a little less guilty about eating bacon.
September 1st, 2014 by Wil
As astute readers are doubtless aware, I recently published a post entitled “What is Real?” in which I posited that most everything we encounter does not exist. My point wasn’t that physical matter doesn’t exist (though maybe it doesn’t) but that the objects we group matter into are not objectively real but are formed from subjective, man-made categories. So the atoms and molecules* that make up a cup, or a cat, or a hat exist, but the objects—cups, cats, and hats—are dependent on humans for their existence.
*Of course, “atoms” and “molecules” are themselves man-made categories.
Today I stumbled onto an interesting rumination by someone named Alan Lightman. He’s titled his piece “My Own Personal Nothingness” and uses it to explore this “nothing is real” conceit. At one point he argues that institutions—churches, organizations, governmental bodies, political parties—are not real. They, like everything else, exist only in the minds of men.
Likewise, our human-made institutions. We endow our art and our cultures and our codes of ethics and our laws with a grand and everlasting existence. We give these institutions an authority that extends far beyond ourselves. But in fact, all of these are constructions of our minds. That is, these institutions and codes and their imputed meanings are all consequences of exchanges between neurons, which in turn are simply material atoms. They are all mental constructions. They have no reality other than that which we give them, individually and collectively.
This might seem an innocuous, even boring statement but, as I’ve argued elsewhere, this has huge ramifications for morality. In modern society, if you commit an evil act, you are (hopefully) caught and placed in prison. But if our institutions of law and morality are mere figments of our imagination, how do we know how to objectively separate right from wrong?
But there’s also something a bit freeing about the notion that institutions are meaningless. It means that we really don’t owe institutions any fealty, that we shouldn’t concern ourselves with them. I know a lot of people who defer greatly to their political party (and roundly condemn anyone who doesn’t support their party) and when a politician of that party is caught in a wrongdoing—say, fornicating with a goat—it pains these people. But these institutions only have the power we give them. I used to be much more deferential to academics or doctors and the institutions they represent, but I now see they’re taking guesses about how life works, just like the rest of us. (I don’t want to overplay this statement—I defer to doctors more than astrologers, but I recognize they aren’t perfect.)
Anyway, Lightman’s short piece is worth reading as are the comments it has generated.
August 31st, 2014 by Wil
Stanley Jordan is an interesting guitarist who first appeared on the jazz scene decades ago—early eighties I think. He made a splash with an interesting technique: playing the guitar as one plays a piano—he used both hands to tap notes on the fretboard. It was similar to Van Halen’s two-handed tapping but its own kind of monster. I’ve owned a few of Jordan’s albums and saw him live once and he’s very impressive.
I stumbled across this recent interview with Jordan. It caught my attention partly because Jordan clearly is a hyper intelligent fellow with a lot of diverse interests. But also, it’s pretty clear that he’s openly acknowledging being gay or transgendered or some combination thereof. (He doesn’t actually say this, but his appearance, affectation and shots of him performing in more flamboyant attire would seem to make it clear.) I find myself wondering whether the fact that he created a unique and revolutionary guitar style is in some way related to the fact that he’s not tied down to a traditional sense of self. Like, on some level he’s innately so outside the box (gender-role-wise) that he feels free to throw the box out the window (in terms of his playing.)
I could be totally wrong about this, I suppose, but compare images of Jordan in years past with how he appears today and I think you’ll see what I’m talking about.
The interviewer seems like a dullard and early on confuses the word “chorus” with “chords.”
August 28th, 2014 by Wil
I was thinking the other day about the topic of musical dissonance. Dissonance is a somewhat relative term—some people hear a piece of music and consider it sharply dissonant, others less so—but there’s some general agreement. Few would argue that there’s not a lot of dissonance in Jerry Goldsmith’s Planet of the Apes soundtrack.
I know some people who are really averse to musical dissonance. I know others, like myself, who don’t find dissonance particularly perturbing. It struck me that a lot of the people I know who dislike dissonance tend to be clean freaks – they’re unusually repulsed by bugs, filth and such. I wonder if there’s some correlation – is their distaste for dissonance (a kind of musical filth) related to their fear of general filth?
There’s some research into the neuroscience of all this. I found this essay online that synopsizes some of it.
A recent experiment dealt with this problem by attempting to minimize subjectivity, by measuring responses to dissonance. (1) Dissonance can consistently create feelings of unpleasantness in a subject, even if the subject has never heard the music before. Music of varying dissonance was played for the subjects, while their cerebral blood flow was measured. Increased blood flow in a specific area of the brain corresponded with increased activity. It was found that the varying degrees of dissonance caused increased activity in the paralimbic regions of the brain, which are associated with emotional processes.
Another recent experiment measured the activity in the brain while subjects were played previously-chosen musical pieces which created feelings of intense pleasure for them. (2) The musical pieces had an intrinsic emotional value for the subjects, and no memories or other associations attached to them. Activity was seen in the reward/motivation, emotion, and arousal areas of the brain. This result was interesting partly because these areas are associated with the pleasure induced by food, sex, and drugs of abuse, which would imply a connection between such pleasure and the pleasure induced by music.
BTW – here’s that Planet of the Apes soundtrack. Brilliant stuff – I love the weird percussion bit around 6:35.
August 26th, 2014 by Wil
Science writer Nicholas Wade recently wrote a book about the role of race in the development of human culture. According to his thesis, the different races possess more or less of certain collections of genes, some of these genes are responsible for human behavior, and therefore certain races are genetically predisposed towards certain behaviors*. This is controversial because it implies that in some sense races can’t change and efforts to help them do so may be doomed to failure.
* Summarizing what Wade is saying is close to impossible and I’m sure people could quibble with what I’m saying here but I feel it’s close enough for this discussion.
Many people disagree with Wade’s theory and one frequent rebuttal is that race itself doesn’t exist—it is, they frequently say, a “social construct.” By this they mean the division of race has no real meaning in nature. For example, the term species divides animal groups that can’t reproduce with each other. In that sense, species is a real term. But race is much harder to define. Different races (called sub-species by people who debate this stuff) can reproduce with each other. One might point out the differences in skin color and appearance between different races but that gets messy quickly. There are plenty of light-skinned blacks or Asian-looking Caucasians, etc.
In this sense, I tend to agree that race is a social construct. But, as you think about it, so is pretty much everything. Words have meaning because enough of us got together and agreed they have meaning. If we didn’t all agree that a cup was a cup and that its purpose was to hold things to drink, it wouldn’t be a cup. If everyone on earth died then cups would no longer exist. They might exist in the sense that their matter would still exist (assuming the earth wasn’t destroyed or what have you) but as an object—a category—cups would be extinct. The definition of a cup is a man-made distinction which has no objective meaning.
(Of course, definitions are kind of blurry. Some people might look at a tall cup and claim it’s a flower vase. And we also hear about weird German words that have no translation in English.)
This reminds me of a few tidbits I’ve read in relation to Buddhist thought. There is a notion there that you can experience an object before you apply all the man-made definitions and correlations related to it. I suppose we all do this for a nanosecond before we mentally identify an object. For the briefest of moments, before you identify a cup, you experience it as some undefined thing. (This moment is so fast it’s questionable whether you can say we “experience it” but there you have it.)
Watching my dad and his wife, both in their 90s, I see a certain breakdown of this system of categories, this taxonomy, that we apply to everything around us. They might be baffled by what a fairly basic object is, or they might understand it but mislabel it; there’s a lot of calling things by words that rhyme with the real name—cup could become pup, for example. I suppose this is what life was like when we were babies—everything was just a thing, and often we probably couldn’t even differentiate between things. A newspaper next to an apple next to a kitten was just a pile of “stuff” in our new minds.
This leads to an interesting point. What babies and demented people have in common are essentially brains that don’t categorize well. The neurons of their brains have limited connections (either because the connections haven’t formed yet as in the case of babies, or because they have deteriorated as in the case of older adults.) This would imply that the meaning we apply to the objects we encounter is literally wired into our brains. It’s the structure of our brains that applies meaning. From this one can presume that a brain structured differently would find different meanings in the world. (Say the brain of an autistic child. Or an alien. Or a sentient computer.)
It really leads to the question of “what is real.” Our words are not real. Our categories are not real. The only thing really real that I can see is the physical matter of the universe. Even the distinctions between these bits of matter (e.g. molecules, atoms, electrons, quarks etc.) are not really real.
This is heavy shit to think about. It’s giving me a headache.
August 24th, 2014 by Wil
You may have heard the recent allegation that saturated fat, long thought to be evil, is actually fine. (This NY Times op-ed has details.) Along with red wine, coffee and chocolate, saturated fat seems to be another substance that the medical and diet industries got wrong for years.
When the revised opinion of saturated fats hit the news, I passed it on to several people in conversation. They would usually say something like, “Oh, so it’s ok for me to eat pepperoni pizza?” I would have to warn them, “That food is high in salt and salt is still bad.”
Except, maybe not. Peruse this NY Times editorial.
The current average sodium consumption in the United States is about 3,400 milligrams per day. This is mostly ingested in processed foods and is equivalent to the amount of sodium in about 1 1/2 teaspoons of salt. Dietary guidelines endorsed by the federal government and leading medical groups recommend reducing the average to 2,300 milligrams for the general population and 1,500 for groups deemed at greater risk, like adults older than 50, African-Americans, people with high blood pressure and diabetics, among others.
There is considerable evidence that lowering sodium can reduce blood pressure, but there is scant evidence that reducing blood pressure from levels that are not clearly high will necessarily reduce the risk of heart attacks, strokes and death.
Previous studies have found little evidence to support those low recommended sodium targets. Now a large study by researchers at McMaster University in Ontario, Canada, which tracked more than 100,000 people from 17 countries on five continents, has found that the safest levels of sodium consumption are between 3,000 and 6,000 milligrams.
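The teaspoon figure in the quoted passage checks out arithmetically. Table salt (NaCl) is about 39 percent sodium by atomic weight, and a level teaspoon of salt weighs roughly 6 grams (that teaspoon weight is an approximation). A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check of "3,400 mg of sodium is about 1 1/2 teaspoons of salt."
# Sodium's share of NaCl by atomic weight: Na 22.99 vs. Cl 35.45.
SODIUM_FRACTION = 22.99 / (22.99 + 35.45)  # roughly 0.39
GRAMS_SALT_PER_TSP = 6.0                   # approximate weight of a level teaspoon

mg_sodium_per_tsp = GRAMS_SALT_PER_TSP * 1000 * SODIUM_FRACTION
teaspoons_for_3400_mg = 3400 / mg_sodium_per_tsp

print(round(mg_sodium_per_tsp))         # 2360 mg of sodium per teaspoon
print(round(teaspoons_for_3400_mg, 1))  # 1.4 teaspoons
```

By the same arithmetic, the recommended 2,300 mg target works out to about one teaspoon of salt a day, all sources included.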
My dad is on a 1500 milligram a day limit. Should I be worried that it’s too low? Maybe.
Other studies have found that very low levels of sodium can disrupt biochemical systems that are essential to human health or trigger hormones that raise cardiovascular risks.
To be fair, as the article states, the science is not settled here. But given that the track record of the health nannies is becoming more and more dubious I think an extra slice of pizza is justifiable.