Author Archives: Wil

About Wil

A groovy guy thinking deep thoughts.

Short man’s vindication?

I’ve mentioned my conflicted feelings about the strange Bob Brozman situation. Brozman was a great acoustic guitar player who killed himself this past year. Laudatory eulogies flowed forth from the web until allegations surfaced that Brozman had been a child molester.

Last night I was listening to a Brozman track called “Short Man’s Vindication” in which he bemoaned his small stature. I couldn’t help but be reminded of an L.A. Times article on pedophilia that I commented on.

Researchers have also determined that pedophiles are nearly an inch shorter on average than non-pedophiles and lag behind the average IQ by 10 points — discoveries that are consistent with developmental problems, whether before birth or in childhood.

Am I implying that all short people are pedophiles? Hardly (I would guess the number is more like 50%.) But I can’t help but wonder if maybe Randy Newman was on to something.

What’s the definitive version of a song?

We are all familiar with the idea that there is a definitive version of some song. There have been various versions of “Hotel California,” for example, but we would all agree that the definitive version is the original by the Eagles. Is the definitive version always the original? Usually, I’d say, but not always. Leonard Cohen’s “Hallelujah” is a great song, but it might have been surpassed by Jeff Buckley’s version. Or let’s consider “Layla.” I would argue the classic Derek and the Dominos version (featuring Eric Clapton) is the definitive one, but I would be open to the argument that Clapton’s later acoustic version is a contender. Obviously this is all rather subjective.

Here are some interesting observations. In the realm of rock and pop, the definitive version is almost always the original (or at least the most successful) version. But not so much with jazz or blues. What’s the definitive version of “Misty” or “Ain’t Misbehavin’” or “Sweet Home Chicago”? We all might have our favorites, but we probably wouldn’t argue that our fave is the definitive one. This is even more true with classical music. Most of those pieces were performed thousands of times before we even had recording technology. Who is to say that the definitive version of Bach’s first invention wasn’t performed in a Polish salon in 1854?

In a sense, recording has enabled us to capture elements of definitiveness that were not possible in earlier days. You could, for example, play “Hotel California” on a piano or accordion, but it is much more definitive to play it as it is on the album, on acoustic guitar, specifically a 12-string acoustic. The exact instrumentation is important. On the flip side, a version of “Misty” on piano seems no less definitive than one on guitar.

When I think of my experience as a musician I note that there are certain songs that everyone feels you need to play a certain way to really capture the essence of the tune. It’s felt that you need to play the riff exactly as it is on the album or make sure a particular vocal harmony is in there. This would be true with prog rock, new wave, certain kinds of pop. It’s much less true with jazz, blues and “looser” music genres.

I’m not sure what this all means but it’s interesting to think about.

Is worrying worth it?

There is an absolutely terrific article about one man’s battle with debilitating anxiety over at the Atlantic Monthly web site. Not only is it an example of thoroughly engaging long-form journalism, it has a hilarious account of being struck by gastrointestinal issues while staying on the Kennedy estate.

Deep in the article, the author raises the possibility that anxiety—some anxiety—is good.

An influential study conducted 100 years ago by two Harvard psychologists, Robert M. Yerkes and John Dillingham Dodson, laid the foundation for the idea that moderate levels of anxiety improve performance: too much anxiety, obviously, and performance is impaired, but too little anxiety also impairs performance. “Without anxiety, little would be accomplished,” David Barlow, the founder and director emeritus of the Center for Anxiety and Related Disorders at Boston University, has written.

The performance of athletes, entertainers, executives, artisans, and students would suffer; creativity would diminish; crops might not be planted. And we would all achieve that idyllic state long sought after in our fast-paced society of whiling away our lives under a shade tree. This would be as deadly for the species as nuclear war.

I’ve seen this point made before and it makes intuitive sense. If we, as a species evolving through time, had been incapable of worry, we never would have survived. And even in the present day we need some level of alertness to get things done. This is the advantage experienced by the college student who waits until the last minute to write a term paper; the resulting anxiety of the moment sharpens his or her thinking.

The article continues:

Historical evidence suggests that anxiety can be allied to artistic and creative genius. The literary gifts of Emily Dickinson, for example, were inextricably bound up with her reclusiveness, which some say was a product of anxiety. (She was completely housebound after age 40.) Franz Kafka yoked his neurotic sensibility to his artistic sensibility; Woody Allen has done the same. Jerome Kagan, an eminent Harvard psychologist who has spent more than 50 years studying human temperament, argues that T. S. Eliot’s anxiety and “high reactive” physiology helped make him a great poet. Eliot was, Kagan observes, a “shy, cautious, sensitive child”—but because he also had a supportive family, good schooling, and “unusual verbal abilities,” Eliot was able to “exploit his temperamental preference for an introverted, solitary life.”

Perhaps most famously, Marcel Proust transmuted his neurotic sensibility into art. Proust’s father, Adrien, was a physician with a strong interest in nervous health and a co-author of an influential book called The Hygiene of the Neurasthenic. Marcel read his father’s book, as well as books by many of the other leading nerve doctors of his day, and incorporated their work into his; his fiction and nonfiction are “saturated with the vocabulary of nervous dysfunction,” as one historian has put it. For Proust, refinement of artistic sensibility was directly tied to a nervous disposition.

But one can raise a rebuttal here. So anxiety creates good art? So what? Is being an artistic virtuoso worth being a nervous wreck for your entire life? (For that matter, is it worth being a good accountant, software engineer, sales rep or much of anything?) Robert Sapolsky has talked about how our fight-or-flight mechanism—originally used to protect us from attacking lions—is now used to make us worry about catching the subway on time. Shouldn’t we, in our modern, much safer society, be worrying less—much less?

I see the grand Catch-22 here. Worry too much and you’re miserable. Worry too little and you’re dead. However, I think society is tilted too far in favor of the former.

Google profits from piracy?

For a while now I’ve been arguing that the digitization of content is leading to the devaluation of content. This has certainly affected the news industry (see dying newspapers and magazines) and the music industry (see the loss of revenue and the general sense that music is free). I’ve long felt the movie industry will fall partial victim to this trend; the main thing sparing it right now is that ripped movie files are still pretty big and thus unwieldy to upload and download.

That said, I’ve been noticing a lot of pirated movies over at YouTube. I’ve even been passing some evenings checking them out; there are a lot of great classic and independent horror flicks there. (I wrote about some of them here.)

The ethical problem is, of course, that these movies are pirated and the people who produced and contributed to the films see not one dime from their films being shown on YouTube. I am, fortunately, free of ethics, but other folks may be bothered by this. You often find these “information should be free” types tying themselves into pretzels of logic to defend piracy.

One thing that I am struck by is the fact that Google—the owner of YouTube—is selling advertisements that run before the movies. So, not only are they looking the other way on movie piracy, they are profiting from it. I was a little curious about this state of affairs and tracked down this HuffPo article, which examines it. As you might surmise, Google is protected because they are not legally responsible for what their users upload (which, I admit, is probably a fair position). Nonetheless, this development clearly does not encourage the production of cinema, especially independent cinema.

The article states…

…according to judgments in the YouTube v Viacom case, the DMCA provides YouTube with protection against copyright infringements carried out by its users. YouTube is not under an obligation to ensure that the service it provides complies with copyright law. Instead, copyright owners must report infringement to YouTube. So rather than YouTube creating a clever piece of software, or employing a small team of compliance officers, studios and producers all over the world have to duplicate effort and cost to monitor whether their titles are being uploaded. It doesn’t look like this inefficient method of policing copyright is going to change any time soon, but things get really interesting when one looks at the money that is being made from illegal uploads. YouTube may not have a responsibility to ensure that the content it publishes complies with the law, but does that entitle it to derive revenue from illegally uploaded content?

Of related interest: this article alleging Google, Yahoo and Bing knowingly sell ads to spammers. So much for “don’t be evil.” But what are we going to do about it? Stop using Google? Of course not. We are their bitches.

R.I.P. Al Goldstein

Al Goldstein, publisher of the famous porn mag “Screw,” died recently. He had an epic life arc; at one point he was a millionaire from “Screw,” but…

By the mid-2000s, Goldstein was homeless. He once got a job as a restaurant greeter, only to lose it when the management discovered he was sleeping on the premises.
Even in that period he reveled in irreverent humor. In 2005, when he worked at a New York deli, he told the Washington Post, “I’ve gone from broads to bagels.” But there was also the ever-present anger. “Anyone who wishes ill on me should feel vindicated,” he told the New York Times in 2004, “because my life has turned into a total horror.”

What brought the duke of porn down? The ubiquitous availability of porn once the internet appeared. I wonder if that is a sign of things to come for other industries—music, books, movies—that find their content becoming more and more digitized.

I discussed Al’s strange public-access talk show “Midnight Blue” a while back.

The general focus of the show was interviews, usually with female porn stars, though eventually non-porn guests like O.J. Simpson, Arnold Schwarzenegger, Gilbert Gottfried and Debbie Harry made appearances. Al’s primary interest was that of sexual technique; he would often throw out provocative, curse laden queries along the lines of “How do you like your pussy licked?” or “Is there anything wrong with me fucking a chick with my nose?” The porn royalty sat in the hot seat—too jaded to be shocked at Al’s questions—and offered serious answers. (One of my favorite “MB” moments occurred when Al asked Carol Connors, the “forgotten actress of Deep Throat” (and mother of current mainstream film actress Thora Birch!) whether she would ever perform bestiality. “No,” Carol demurred, “But I love animals!”)

How to innovate! (Don’t be too innovative.)

As a society, or species (or whatever we are), we tend to laud forward-thinking creative geniuses. When we find one, we hoist them onto a pedestal and treat them as (to quote this article) an “Übermensch [who] stands apart from the masses, pulling them forcibly into a future that they can neither understand nor appreciate.” This is true across all disciplines. Think of Einstein, Beethoven, Picasso, on and on.

So how does one become a genius? Clearly you have to innovate, to do something no one else has done. But there’s a catch here. You can’t be too innovative. You can’t be so ahead of the curve that nobody can really grasp what you’re saying or doing.

Let me propose a thought experiment. Jimi Hendrix travels to Vienna around 1750 and plays his music. Would he be lauded as a genius? Would his guitar playing be heard as the obvious evolution of current trends in music? No, he’d probably be regarded as an idiot making hideous noise and he might be burned at the stake.

But, let music evolve for around 220 years and yes, Jimi is rightfully regarded as a genius. His sound was made palatable by those who came before him, mainly the electric blues guitar players of the 50s and 60s. (Obviously there are a lot of other factors (like race and class and sex) relevant to who gets crowned a genius, but I’m painting in broad strokes here.)

So the trick to being a genius is to be ahead of your time but not too ahead. The world of science and medicine is filled with examples. Gregor Mendel famously discovered that physical traits could be passed from one generation of life to another. In what was a major breakthrough in our understanding of biology, he theorized what we came to call genes. He published his results and was met with pretty much total indifference. It wasn’t until his work was rediscovered decades later that his ideas were applied. Mendel was too ahead of his time.

The book “The Mind’s I” notes the mathematician Giovanni Girolamo Saccheri, who contributed to the discovery of non-Euclidean geometry. His ideas were so controversial that even Saccheri himself rejected them! (At least he did according to the book; there seems to be some debate on this. See the last paragraph of the Saccheri wiki page.) Talk about being too ahead of your time.

But perhaps the best example of this sort of thing is Ignaz Semmelweis. The Hungarian physician…

…discovered that the incidence of puerperal fever could be drastically cut by the use of hand disinfection in obstetrical clinics.

That’s right, he basically came up with the crazy idea that doctors should wash their hands after touching sick people. Unfortunately…

Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis’s observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings. Semmelweis’s practice earned widespread acceptance only years after his death, when Louis Pasteur confirmed the germ theory and Joseph Lister, acting on the French microbiologist’s research, practiced and operated, using hygienic methods, with great success.

Oh well. Semmelweis probably still had a great career and life right?

Umm, no.

In 1865, Semmelweis was committed to an asylum, where he died at age 47 after being beaten by the guards, only 14 days after he was committed.

Don’t be too ahead of the curve, folks.

The sound of language (and music)

A while back I described John Searle’s Chinese Room argument, which purports to show that computers will never be able to understand meaning as humans do. He argues that computers can trade in the syntax of a statement but not the semantics. So you could present a computer with “2 + 2 = ?” and it would answer “4,” but it would not understand the meaning behind those symbols. (You can read the post for more info on Searle’s idea, or, you know, google it.)
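
To make the distinction concrete, here is a toy sketch of my own (not anything from Searle or the book): a little Python program that answers arithmetic questions by pure string lookup. It gets “2 + 2 = ?” right without doing any arithmetic or having any notion of quantity, which is roughly the sense in which Searle says computers traffic in syntax rather than semantics.

# A toy "rule book" responder: pure symbol matching, no arithmetic,
# no concept of what "2" or "+" mean. (My own illustration, not Searle's.)
RULE_BOOK = {
    "2 + 2 = ?": "4",
    "3 + 5 = ?": "8",
}

def respond(symbols: str) -> str:
    # Return whatever output string the rule book pairs with the input string.
    return RULE_BOOK.get(symbols, "no rule for that string")

print(respond("2 + 2 = ?"))  # prints "4", with no understanding behind it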

It happens that the book I’m reading, “The Mind’s I,” contains the original text of Searle’s argument and the authors’ rebuttal of it. But I’m more interested in some specific commentary on the nature of learning a language. The authors point out that to successfully learn a language you need to hear not the sounds but the meaning. For people who learn a new language…

The sounds of the second language pretty soon become “unheard”—you hear right through them, rather than hearing them… Of course you can make yourself hear a familiar language as pure uninterpreted sound if you try very hard… but you can’t have your cake and eat it too—you can’t hear the sounds both with and without their meanings. And so most of the time people hear mainly meaning.

I was a little perplexed by this point. I am very familiar with English and I don’t think I could force myself not to hear the meaning behind a word. If I hear “dog” I immediately think of a cute, furry creature with a tail; I can’t just hear the phonemes as sounds devoid of meaning. However, I think the authors are really referring to languages we may be familiar with but not masters of: second and third languages. I should try this experiment with both French and German, which I dabble in. As it is, I do have a sense that when I hear a foreign word there’s a weird transformation process. If I hear “Hund” (German) I think, “Hund” = “dog” = a furry canine type creature. I don’t literally think this of course, but I have to translate the word to the English word and then to the concept it represents. Clearly, to speak well, I need to eliminate the middle step and go right from the foreign word to the concept.

So, learning a language is really just getting to the point where the meaning of the word or phrase comes jumping into your consciousness immediately upon hearing/seeing it. This mental word processing needs to be automatic and unconscious. A bit like learning to ride a bike, I suppose. At first you have to think about it: move this leg then that leg, hold your hands this way, torque your body for balance, etc. But eventually you don’t think about it at all.
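
For what it’s worth, here is a little sketch of my own contrasting the two routes (the words and “concepts” are invented stand-ins, not anything from “The Mind’s I”): the beginner chains through the English word, while the fluent speaker goes straight from the foreign word to the concept.

# Toy contrast between a learner's two-step translation and a fluent
# speaker's direct word-to-concept mapping. All entries are made-up stand-ins.
GERMAN_TO_ENGLISH = {"Hund": "dog", "Katze": "cat"}
ENGLISH_TO_CONCEPT = {"dog": "furry canine creature", "cat": "aloof furry feline"}
GERMAN_TO_CONCEPT = {"Hund": "furry canine creature", "Katze": "aloof furry feline"}

def learner_hears(word: str) -> str:
    # Beginner's route: foreign word -> English word -> concept.
    english = GERMAN_TO_ENGLISH[word]
    return ENGLISH_TO_CONCEPT[english]

def fluent_speaker_hears(word: str) -> str:
    # Fluent route: foreign word -> concept, no English middle step.
    return GERMAN_TO_CONCEPT[word]

print(learner_hears("Hund"))         # "furry canine creature", via "dog"
print(fluent_speaker_hears("Hund"))  # same concept, one hop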

A point that “The Mind’s I” touches on is that we also have meaning for music, though it’s much harder to define. You might hear a piece of music and “think”, “This is Led Zeppelin’s ‘Black Dog’. Those boys were a crazy, hard-drinking band from the 70s who epitomized rock and roll excess. I lost my virginity to this song*.” Again, the processing of all this is unconscious. You don’t “do” it, you “experience” it. (And it should be noted that, unlike with words, the information ‘Black Dog’ evokes for you may be far different from what it evokes for me.)

* I didn’t actually; in fact no music was playing when I lost my virginity. But you might have.

Does life exist?

I’ve mentioned that I often find myself musing on an original thought only to find, after a month or so, that someone grabs attention by publishing the same idea, usually in some sort of “respected” journal or web site. A lesser man might be upset with the psychic theft of his ideas, but not me. I’m happy to provide my musings for the good of mankind.

Not long ago I was thinking about how we define the notion of life. For instance, we define a grasshopper as alive and a rock as not. But the more you reduce living things to their tiny components, the more they appear similar to non-living things. All of us—living and dead—are made up of molecules, which themselves are made up of atoms, which can be broken down into quantum particles. If we are all made up of essentially the same stuff, why are some things alive and some not?

You might say, “because living things move,” but of course so do remote controlled cars. And some non-living things don’t move for eons.

In a blog post entitled “Why Life Does Not Really Exist,” science writer Ferris Jabr takes this ball and runs with it, doing a much better job with the topic than I could. Ultimately he arrives here:

Why is defining life so frustratingly difficult? Why have scientists and philosophers failed for centuries to find a specific physical property or set of properties that clearly separates the living from the inanimate? Because such a property does not exist. Life is a concept that we invented. On the most fundamental level, all matter that exists is an arrangement of atoms and their constituent particles. These arrangements fall onto an immense spectrum of complexity, from a single hydrogen atom to something as intricate as a brain. In trying to define life, we have drawn a line at an arbitrary level of complexity and declared that everything above that border is alive and everything below it is not. In truth, this division does not exist outside the mind. There is no threshold at which a collection of atoms suddenly becomes alive, no categorical distinction between the living and inanimate, no Frankensteinian spark. We have failed to define life because there was never anything to define in the first place.

My sentiments exactly! But Jabr then fails to explore the dark questions this raises. Modern ethics and morality are all based on the assumption that life is something… a vital force, a soul, whatever. How, then, do we reconcile our moral concepts with the view that life is not real? Why is it wrong for me to roll a steamroller over a baby (i.e. a collection of molecules) but not a log (i.e. a collection of molecules)? These sorts of questions are, I think, going to be the difficult problems of the coming centuries.

You could accuse me of being willfully ignorant here. I don’t, of course, go through life equating people with rocks and logs. But I do ask why I don’t. Is the distinction an essentially meaningless (though, from an evolutionary perspective, useful) one built into the human mind? Or is there a real qualitative difference between the living and non-living?

HAPPY MONDAY!

Who gets the credit for our thoughts?

Lately I’ve found myself noticing a phenomenon I’ve probably mentioned here in the past: the way ideas seem to leap up out of the netherworlds of my mind into my conscious brain. This happens a lot while waking up. Some particular issue is bothering me, perhaps something work-related or a problem with a song or piece of writing I’m working on, and the solution suddenly appears. I find I don’t “build” ideas in a step-by-step manner, but rather that they “pop up,” often fully formed.

In his book “How We Decide,” Jonah Lehrer advocated the “sleep on it” method of problem solving. Struggling with a problem is often ineffective, he argued. You are better off taking a walk or doing something to distract your conscious mind. Let your subconscious work on the solution.

I’m reading “The Mind’s I” and it makes an interesting point related to all this.

Our conscious thoughts seem to come bubbling up from the subterranean caverns of our mind, images flood into our mind’s eye without our having any idea where they came from! Yet when we publish them, we expect that we—not our subconscious structures—will get credit for our thoughts. This dichotomy of the creative self into a conscious part and an unconscious part is one of the most disturbing aspects of trying to understand the mind. If—as was just asserted—our best ideas come burbling up as if from mysterious underground springs, then who really are we? Where does the creative spirit reside? Is it by an act of will that we create, or are we just automata made out of biological hardware, from birth until death fooling ourselves through idle chatter into thinking that we have “free will”? If we are fooling ourselves about these matters, then whom—or what—are we fooling?

Do we “deserve” credit for our accomplishments and ideas?

John Lennon, scumbag?

I’ve always been a bit ambivalent about John Lennon. I generally prefer Paul’s contributions to the Beatles (though I concede John’s work, especially post-Beatles, had more gravitas). In interviews and whatnot, Lennon often comes across as a pretentious twat.

Today I stumbled across this post, which savages Lennon from a feminist perspective. Some of the complaints I find rather forced (Lennon’s “appropriation of Indian music and culture”? Seriously?), but there’s a lot I did not know here. In particular, Lennon almost beat a man to death. (Quote below from this site, linked off the aforementioned blog.)

Wooler was a very close friend of the Beatles and had introduced them on stage some 300 times. This incident happened at Paul’s 21st birthday party, on June 18, 1963. At the party, Wooler was joking around with John and said (with heavy gay intimations): “Come on John, what really happened with you and Brian? Everybody knows anyway, so tell us.”

John had been heavily drinking that night and Lennon was a notorious “bad drunk”. In a blind rage, John proceeded to beat the stuffing out of a very surprised Bob Wooler, literally kicking him repeatedly in the ribs as he lay on the ground in a bloody heap.

According to John, the only reason he actually stopped the savage beating was because, “I realized I was actually going to kill him… I just saw it like a screen. If I hit him once more, that’s really going to be it. I really got shocked and for the first time thought: ‘I can kill this guy.’”

Also worth reading (though sadly under-sourced): 10 Unpleasant Facts About John Lennon.