Archive for the 'Music' Category

What’s the definitive version of a song?

We are all familiar with the idea that there is a definitive version of some song. There have been various versions of “Hotel California,” for example, but we would all agree that the definitive version is the original by the Eagles. Is the definitive version always the original? I would say usually this is the case, but not always. Leonard Cohen’s “Hallelujah” is a great song but it might have been surpassed by Jeff Buckley’s version. Or let’s consider “Layla.” I would argue the classic Derek and the Dominos version (featuring Eric Clapton) is the definitive version, but I would be open to the argument that Clapton’s later acoustic version is a contender. Obviously this is all rather subjective.

Here are some interesting observations. In the realm of rock and pop, the definitive version is almost always the original (or at least the most successful) version. But not so much with jazz or blues. What’s the definitive version of “Misty” or “Ain’t Misbehavin’” or “Sweet Home Chicago”? We all might have our favorites but we probably wouldn’t argue that our fave is the definitive one. This is even more true with classical music. Most of those pieces were performed thousands of times before we even had recording technology. Who is to say that the definitive version of Bach’s first invention wasn’t performed in a Polish salon in 1854?

In a sense, recording has enabled us to capture elements of definitiveness that were not possible in earlier days. You could, for example, play “Hotel California” on a piano or accordion, but it is much more definitive to play it as it is on the album, on acoustic guitar, specifically a 12-string acoustic. The exact instrumentation is important. On the flip side, a version of “Misty” on piano seems no less definitive than one on guitar.

When I think of my experience as a musician I note that there are certain songs that everyone feels you need to play a certain way to really capture the essence of the tune. It’s felt that you need to play the riff exactly as it is on the album or make sure a particular vocal harmony is in there. This would be true with prog rock, new wave, certain kinds of pop. It’s much less true with jazz, blues and “looser” music genres.

I’m not sure what this all means but it’s interesting to think about.

How to innovate! (Don’t be too innovative.)

As a society, or species, (or whatever we are), we tend to laud forward thinking creative geniuses. When we find one, we hoist them onto a pedestal and treat them as an (to quote this article) “Übermensch [who] stands apart from the masses, pulling them forcibly into a future that they can neither understand nor appreciate.” This is true across all disciplines. Think of Einstein, Beethoven, Picasso, on and on.

So how does one become a genius? Clearly you have to innovate, to do something no one else has done. But there’s a catch here. You can’t be too innovative. You can’t be so ahead of the curve that nobody can really grasp what you’re saying or doing.

Let me propose a thought experiment. Jimi Hendrix travels to Vienna around 1750 and plays his music. Would he be lauded as a genius? Would his guitar playing be heard as the obvious evolution of current trends in music? No, he’d probably be regarded as an idiot making hideous noise and he might be burned at the stake.

But let music evolve for around 220 years and yes, Jimi is rightfully regarded as a genius. His sound was made palatable by those who came before him, mainly electric blues guitar players of the 50s and 60s. (Obviously there are a lot of other factors (like race and class and sex) relevant to who gets crowned a genius, but I’m painting in broad strokes here.)

So the trick to being a genius is to be ahead of your time but not too ahead. The world of science and medicine is filled with examples. Gregor Mendel famously discovered that physical traits could be passed from one generation of life to another. In what was a major breakthrough in our understanding of biology, he theorized what we came to call genes. He published his results and was met with pretty much total indifference. It wasn’t until his work was rediscovered decades later that it was applied. Mendel was too ahead of his time.

The book “The Mind’s I” notes the mathematician Giovanni Girolamo Saccheri, who contributed to the discovery of non-Euclidean geometry. His ideas were so controversial that even Saccheri himself rejected them! (At least he did according to the book; there seems to be some debate on this. See the last paragraph on the Saccheri wiki page.) Talk about being too ahead of your time.

But perhaps the best example of this sort of thing is Ignaz Semmelweis. The Hungarian physician…

…discovered that the incidence of puerperal fever could be drastically cut by the use of hand disinfection in obstetrical clinics.

That’s right, he basically came up with the crazy idea that doctors should wash their hands after touching sick people. Unfortunately…

Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis’s observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings. Semmelweis’s practice earned widespread acceptance only years after his death, when Louis Pasteur confirmed the germ theory and Joseph Lister, acting on the French microbiologist’s research, practiced and operated, using hygienic methods, with great success.

Oh well. Semmelweis probably still had a great career and life right?

Umm, no.

In 1865, Semmelweis was committed to an asylum, where he died at age 47 after being beaten by the guards, only 14 days after he was committed.

Don’t be too ahead of the curve, folks.

The sound of language (and music)

A while back I described John Searle’s Chinese Room argument, which purports to show that computers will never be able to understand meaning as humans do. He argues that computers can trade in the syntax of a statement but not the semantics. So you could present a computer with “2 + 2 = ?” and it would answer “4”, but it would not understand the meaning behind those symbols. (You can read the post for more info on Searle’s idea, or, you know, google it.)

It happens that the book I’m reading, “The Mind’s I” contains the original text of Searle’s argument and the authors’ rebuttal of it. But I’m more interested in some specific commentary on the nature of learning a language. The authors point out that to successfully learn a language you need to hear not the sounds but the meaning. For people who learn a new language…

The sounds of the second language pretty soon become “unheard”—you hear right through them, rather than hearing them… Of course you can make yourself hear a familiar language as pure uninterpreted sound if you try very hard… but you can’t have your cake and eat it too—you can’t hear the sounds both with and without their meanings. And so most of the time people hear mainly meaning.

I was a little perplexed by this point. I am very familiar with English and I don’t think I could force myself to not hear the meaning behind a word. If I hear “dog” I immediately think of a cute, furry creature with a tail; I can’t just hear the phonemes as sounds devoid of meaning. However, I think the authors are really referring to languages we may be familiar with but not masters of: second and third languages. I should try this experiment with both French and German, which I dabble in. As it is, I do have a sense that when I hear a foreign word there’s a weird transformation process. If I hear “Hund” (German) I think, “Hund” = “dog” = a furry canine-type creature. I don’t literally think this of course, but I have to translate the word to the English word and then to the concept it represents. Clearly, to speak well, I need to eliminate the middle step and go right from the foreign word to the concept.

So, learning a language is really just getting to the point where the meaning of the word or phrase comes jumping into your consciousness immediately upon hearing/seeing it. This mental word processing needs to be automatic and unconscious. A bit like learning to ride a bike, I suppose. At first you have to think about it: move this leg then that leg, hold hands this way, torque body for balance, etc. But eventually you don’t think about it at all.

A point that “The Mind’s I” touches on is that we also have meaning for music, though it’s much harder to define. You might hear a piece of music and “think”, “This is Led Zeppelin’s ‘Black Dog’. Those boys were a crazy hard-drinking band from the 70s who epitomized rock and roll excess. I lost my virginity to this song*.” Again, the processing of all this is unconscious. You don’t “do” it, you “experience” it. (And it should be noted that, unlike words, the information evoked when you hear Zep’s ‘Black Dog’ may be far different from mine.)

* I didn’t actually; in fact no music was playing when I lost my virginity. But you might have.

John Lennon, scumbag?

I’ve always been a bit ambivalent about John Lennon. I generally prefer Paul’s contributions to the Beatles (though I concede John’s work, especially post-Beatles, had more gravitas). In interviews and whatnot Lennon often comes across as a pretentious twat.

Today I stumbled across this post which savages Lennon from a feminist perspective. Some of the complaints I find rather forced (Lennon’s “appropriation of Indian music and culture”? Seriously?), but there’s a lot I did not know here. In particular, Lennon almost beat a man to death. (Quote below from this site, linked off the aforementioned blog.)

Wooler was a very close friend of the Beatles and had introduced them on stage some 300 times. This incident happened at Paul’s 21st birthday party, on June 18, 1963. At the party, Wooler was joking around with John and said (with heavy gay intimations): “Come on John, what really happened with you and Brian? Everybody knows anyway, so tell us.”

John had been heavily drinking that night and Lennon was a notorious “bad drunk”. In a blind rage, John proceeded to beat the stuffing out of a very surprised Bob Wooler, literally kicking him repeatedly in the ribs as he lay on the ground in a bloody heap.

According to John, the only reason he actually stopped the savage beating was because, “I realized I was actually going to kill him… I just saw it like a screen. If I hit him once more, that’s really going to be it. I really got shocked and for the first time thought: ‘I can kill this guy.’”

Also worth reading (though sadly under-sourced): 10 Unpleasant Facts About John Lennon.

Where are Photoshop filters for music?

Most people are somewhat familiar with Photoshop, the image editing program that has turbocharged the graphic design industry. And I think most people are generally aware that Photoshop has what are called “filters”—tools with which one can take an ordinary photograph and turn it into a blurry, Monet-style painting, or a pointillist masterpiece, or a piece of pop art a la Lichtenstein. Here, for example, is a filter that gives an image a black and white comic book effect.

How these filters work is a bit beyond me, but one can assume the processes are tailor-made for computation. To consider one example: if we realize that a color’s saturation level can be assigned a number, we then realize that to desaturate an image (in the style of a watercolor painting) we could create a computational rule like: for each pixel with a saturation value higher than [some threshold number], reduce that pixel’s saturation by 20, and repeat until it falls below the threshold.
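For what it’s worth, that rule is simple enough to sketch in code. Here’s a toy version — the threshold and step values are invented, and each pixel is treated as a bare hue/saturation/value triple rather than real image data:

```python
# Toy desaturation "filter" following the rule described above.
# A pixel is a (hue, saturation, value) tuple; saturation runs 0-100.

THRESHOLD = 40  # invented threshold for illustration
STEP = 20       # how much saturation to remove per pass

def desaturate(pixels):
    """Reduce each pixel's saturation by STEP until it falls below THRESHOLD."""
    result = []
    for (h, s, v) in pixels:
        while s >= THRESHOLD:
            s = max(0, s - STEP)  # never go below zero
        result.append((h, s, v))
    return result

# A vivid pixel (saturation 90) gets knocked down; a muted one is untouched.
print(desaturate([(0, 90, 50), (120, 30, 50)]))
```

A real filter would operate on actual image data through a library like Pillow, but the looping logic is the same basic idea.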

So computation and filters have radically affected what’s possible with visual art. It struck me today, why hasn’t this happened with music?

To some degree it has. With MIDI manipulation software, it’s quite easy to swap one synth sound out for another—to make what was initially a trumpet sound like a xylophone, for example. (I find in practice, however, it’s not quite so simple, as how you play a part is dependent on the timbre feedback you get while you play it. When swapping instruments I sometimes have to redo the part.)

You can also easily modulate a piece of music to a new key, so that a piece written in the key of A# can be moved up to C.
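Transposition is trivial in MIDI terms, since notes are just numbers and a semitone is a difference of one. A quick sketch (A# up to C is two semitones):

```python
# Transpose a melody by shifting MIDI note numbers; one semitone = 1.

def transpose(notes, semitones):
    return [n + semitones for n in notes]

# An A#/Bb major triad (Bb, D, F) moved up two semitones becomes C major (C, E, G)
print(transpose([70, 74, 77], 2))  # [72, 76, 79]
```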

But it strikes me that there’s a number of other ways one could use computation tools to make music creation easier. All of the following commands are the kinds of things I would like to be able to request in a program such as Garage Band. I’m using musical terms here that may not be familiar to non-musicians, but I’ll try to keep it simple.

  • Take all the block chords in the selection and turn them into 8th note arpeggios.
  • Harmonize this melody line in thirds.
  • Take my harmony and render it in the style of a ragtime piano. (I think this actually can be accomplished via the software “Band in a Box.”)
  • Take all the instances of a minor chord that precedes a chord a fourth away and change them into dominant 7th chords. (This would have the effect of “jazzing up” the sound of a song. I vi ii V7 would become I VI7 II7 V7.)

These are all “surface level” examples – I can think of plenty of filter ideas that would apply on a more granular level.
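To give a sense of how mechanical the first of those commands is, here’s a rough sketch. This isn’t any real program’s API; it just assumes a chord is a list of simultaneous MIDI note numbers and that durations are counted in beats:

```python
# Hypothetical "arpeggiate" filter: expand each block chord into a run of
# 8th notes cycling through the chord tones. An 8th note lasts 0.5 beats.

def arpeggiate(chords, beats_per_chord=2.0):
    """Turn each chord into a list of (note, start_time, duration) events."""
    events = []
    time = 0.0
    for chord in chords:
        steps = int(beats_per_chord / 0.5)   # number of 8th notes per chord
        for i in range(steps):
            note = chord[i % len(chord)]     # cycle through the chord tones
            events.append((note, time + i * 0.5, 0.5))
        time += beats_per_chord
    return events

# A C major block chord held for two beats becomes four 8th notes: C, E, G, C
print(arpeggiate([[60, 64, 67]]))
```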

My point being that this sort of thing is eminently possible; indeed, it has been for years. Maybe it’s out there and I’m unaware of it, but that would surprise me.

This would of course make the production of music much* easier, and enable the exploration of creative ideas with much less effort. That said, it’s a valid concern that this might make the world of music much worse, creating a mob of middling Mozarts who could render listenable but fundamentally undisciplined music. (I would likely fall into this group.) It’s reasonable to argue that the path to compositional virtuosity should require a degree of effort to travel. But these concerns are exactly the sort of thing I think we’re going to be confronting soon enough anyway.

*I originally mistyped this word as “mush” which is ironic since such software tools might result in a lot of musical mush.

Becoming music

Towards the end of Jaron Lanier’s book “You Are Not a Gadget” he talks of the technology that he’s most famous for: virtual reality. He worked on it quite a while ago, back when VR essentially consisted of strapping on goggles and seeing a wireframe alternate reality. (A bit like early first person shooter video games I suppose.) But while experimenting with VR Lanier noticed something interesting—he could redefine his body in the VR world in some dramatic way, but it wasn’t difficult for his brain to master this new form. In the book, he describes one of his hands becoming very big and, at another time, adding on extra mini-arms and becoming a VR human lobster. He describes this a bit here:

I had the experience of my arm suddenly becoming very large, because of a glitch in the software, and yet still being able to pick things up, even though my body was different. And that sensation of being able to alter your body is different from anything else. I mean, it’s almost like a whole new theater of human experience opens up.

I find this notion that the human brain can easily adapt to new body formations interesting but not all that surprising. The neuroscientist Miguel Nicolelis, who’s working on connecting paralyzed people to robotic body parts, talked quite a bit about this in his book “Beyond Boundaries.” He’s connected monkeys to fake legs and the monkeys “get it” pretty quickly. It’s another example of the plasticity of the brain.

So now I’m going to get a little out there. On Thanksgiving I was lying around listening to some music (Grieg, not that it really matters) and I started thinking. Lanier’s point seems to be that people can easily adapt to sensations coming from virtual body parts. They easily accept that “I’m feeling a sensation on my giant thumb,” for example. As such, we should be able to easily inhabit new forms. But do those forms have to be physical, e.g. body shapes of some type? Can we enter forms made of things like sound? Music, for example? While thinking about this, I made a basic effort to “become” the music I was listening to. And I did have a certain oddball glimmer of a sense that my interpretation of the music changed from something I was listening to to something that I was. Like a part of my consciousness entered the “shape” of the music.

This was not a mind-bending experience; it was merely a slight change of how I viewed the world, not unlike saying, “I wonder what it would be like to be that guy over there” (except in this case the guy was music). And I recognize that I don’t totally understand the experience and probably couldn’t capture it in words even if I did. (It does remind me of the time I “became” water.)

Maybe it works something like this: we tickle our arm and we understand that this sensory experience is something that we are doing to ourselves. But when we listen to music or see a plane crash into a mountain, we understand that to be something we have no control of—events caused by a foreign entity (e.g. the CD player playing the music or the idiot pilot). But what if we change our conception to be that the cause of these events is us (in the same way Lanier in his virtual environment accepted the giant hand to be his)? Do we not then, in some way, become something we are not?

The sounds of nature

It’s noted that humans generally like melodies that move with what we call stepwise motion. Basically this just means we like melodies that go from one note to a note next to or close to it. We like it when a C note goes to a D note or an E note, but we aren’t crazy about wider leaps like, say, C to an F over an octave away. And we definitely don’t like a barrage of crazy leaps – that has the “cat walking on a piano” sound.

There are exceptions to this rule, of course. A lot of jazz and modern classical music does engage in such wild melodic leaps. But that music isn’t particularly popular with the public at large—it tends to be thought of as intellectual music. I doubt you could find a single popular song in history that uses many wide leaps in its melody (the possible exception might be a novelty song of some sort, probably about robots).

Why is this? One could argue that such big leaps are difficult to play. There’s some truth to that. On almost any instrument it’s easier to run up a scale than to leap about the melodic range of an instrument.

But I suspect something else is at work. An argument can be made that early man evolved to find beauty in the sounds he heard around him. And in nature there are very few examples of sounds leaping around melodically. Most animal calls are fairly stepwise (though there are some wild bird calls out there). The vibrating sounds of wind and waterfalls tend to fluctuate subtly. The sounds of nature are like hills and valleys much more so than sharp cliffs.

Thus I have spoken.

Lou Reed

So Lou Reed just died. I’ve never been a big fan of his; he’s sort of associated with punk rock, which is a genre of music I find tedious (though Reed’s proto-punk is quite removed from groups like The Sex Pistols or D.R.I.).

However, a few years ago I discovered Reed’s album “Growing up in Public” and I loved it. It has tons of great songs, including this one (which has a certain prescience, being that Reed died from complications from a liver transplant).

Here’s the title track from the album which is also great. Listen to that great cello-esque bass line.

Music only computers can write

A while back I was reading David Cope’s book on the idea of computer created music. Cope is using computers to create music that sounds like it was written by humans, and that’s a laudable, worthwhile goal. But I find myself wondering if the real value of computer music would be creating music only computers could write. Could computers create music that would tax the compositional abilities of mere humans?

What would this music be like? I suppose really long pieces – songs that go on for hours or days might be an example. So too could music that requires an incredible attention to detail, like music with very precise rules about how notes vary their frequency. Or perhaps a kind of variations-on-a-theme process that could generate endless variations on a melody.

Such music might be interesting, but I’ll grant you it might be quite boring; some people these days can barely pay attention to a ten-minute song, much less a day-long one. I think this music would probably fall into the category of “furniture music”: music meant to be in the background.

Today I came across the website electricsheep.org. This site has a collection of non-representational artwork generated by computers. The computers use a genetic algorithm, which is essentially a software process that duplicates the process living creatures go through to evolve. As I understand it, genetic algorithms introduce mutations into their output; if a mutation is beneficial, it is adopted as a trait, and if not, it is dropped.

I know that’s a bit hard to understand so let me explain with a practical example. Let’s say the electric sheep computer(s) start(s) off with a big red circle. Some possible variations could be “make the circle blue,” “make the circle more square,” “fill in the circle with polka dots,” etc. The traits that succeed get added to the artwork’s “DNA” and carry forth into new generations with additional mutations. But what constitutes success in this realm? People rank the artwork on the site. Top-ranked artwork is more successful than lower-ranked art.

Could such a process be applied to computer music? What’s electric sheep doing exactly? It’s taking shapes – circles, squares, grids, cloud-type shapes – and changing them. Could something similar happen with music? First we’d have to find some analogues to shapes in music. This could be chords, melodies, maybe even rhythms. Could a genetic algorithm be applied that morphed these music elements, and then tracked listener preferences, adopting the high-scoring mutations into the music’s DNA? That’s the kind of music only a computer could compose.
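To make that concrete, here’s a speculative toy version of the idea. Everything here is illustrative: the fitness function is a stand-in for actual listener rankings (which a script obviously can’t collect), and it just happens to prefer stepwise motion:

```python
# Toy electric-sheep-style genetic algorithm for a melody: mutate it,
# and adopt the mutation only if it "scores" higher.

import random

def mutate(melody):
    """Randomly nudge one note up or down by a step or two."""
    new = list(melody)
    i = random.randrange(len(new))
    new[i] += random.choice([-2, -1, 1, 2])
    return new

def fitness(melody):
    """Stand-in for listener rankings: reward stepwise motion."""
    leaps = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
    return -leaps  # smaller total leap distance scores higher

def evolve(melody, generations=200):
    random.seed(0)  # deterministic for this demo
    for _ in range(generations):
        candidate = mutate(melody)
        if fitness(candidate) > fitness(melody):  # adopt beneficial mutations
            melody = candidate
    return melody

# A leapy melody gets smoothed out over the generations
print(evolve([60, 72, 55, 67]))
```

Swap the stand-in fitness function for real human votes and you have, in miniature, the electric sheep loop applied to melodies.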

One final point here. I suspect one other area worth exploring in computer generated music would be microtonal music. This is…

…music using microtones—intervals of less than an equally spaced semitone. Microtonal music can also refer to music which uses intervals not found in the Western system of 12 equal intervals to the octave.

Manipulating music on that level seems like something computers would be good for.

The plot wheel and random idea generators

Erle Stanley Gardner is the author famous for creating Perry Mason. He was also noted for his prolific output; he wrote 82 Perry Mason novels in his career! How did he do it? By using the plot wheel. (Demo of the wheel at the link.)

Key to Gardner’s remarkable output was his use of the plot wheels invented and patented by one of his predecessors, a British crime novelist named Edgar Wallace. By using different combinations of possible twists and turns for both major and minor characters, Gardner was able to construct narratives that held his readers rapt for several decades.

Crime fiction web site The Kill Zone elucidates…

When Gardner kept getting rejection slips that said “plot too thin,” he knew he had to learn how to do it. After much study he said he “began to realize that a story plot was composed of component parts, just as an automobile is.” He began to build stories, not just make them up on the fly. He made a list of parts and turned those into “plot wheels” which was a way of coming up with innumerable combinations. He was able, with this system, to come up with a complete story idea in thirty seconds.

I’ve been intrigued enough by the concept of a random plot generator to start work on a very basic music idea generator. It doesn’t actually write music; it’s merely a list of ways to accompany or dress up a basic tune (for example, by harmonizing a melody in thirds, or applying Bach-style counterpoint to the melody). I’m not randomly generating options, though I might try to add that component later (though I would certainly use my discretion in choosing whether to follow the options it produces).
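A bare-bones version of that random component is almost trivial to write. Here’s a sketch; the treatment list is purely illustrative, not my actual list:

```python
# Toy random music-idea generator, in the spirit of Gardner's plot wheels.

import random

TREATMENTS = [
    "harmonize the melody in thirds",
    "apply Bach-style counterpoint to the melody",
    "turn the block chords into 8th-note arpeggios",
    "re-voice the harmony as a ragtime piano part",
    "swap the lead instrument for a 12-string acoustic",
]

def spin(n=2, seed=None):
    """Pick n treatments at random, like a spin of the wheel."""
    rng = random.Random(seed)
    return rng.sample(TREATMENTS, n)

print(spin(2, seed=42))
```

The value isn’t in the randomness itself; it’s that the wheel forcibly throws away most of your options so you can get moving.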

But why would one want a plot generator or a music idea generator? Why not use the wonderful tool of human creativity? Mainly to overcome a problem that’s all too prevalent these days: the problem of too many options. When constructing a plot it’s very easy to say, “Our hero goes to Istanbul, no wait, Marrakech, no, Tripoli, and there he finds a golden sword, no wait, a magic coffee cup, no, wait, a mystical ashtray and then he…” You get the picture. Stories can suffer analysis paralysis if you can’t cordon off your options. The same goes with music and probably all creative processes. If we had all the time in the world then we could explore all the possibilities, but we seldom do.

The challenge of the “too many options” situation is that you have to know what to throw away. A plot wheel, or my proposed more advanced music idea generator, basically uses chance to make these decisions. (A bit like John Cage’s chance-derived music.) This isn’t a bad way to get the ball rolling, though it probably results in somewhat hokey, discombobulated output. But if you want to knock something out, or are at a standstill, it’s a legitimate option.

This approach isn’t limited to creative processes, by the way. I used to go to movie rental stores and walk the aisles for close to an hour looking for the perfect movie. I probably would have been better off going to a section I liked (horror or independent cinema), throwing a dart and taking whatever it landed on.

My sense is that in this ever-expanding world of choices – of 300-channel television, of a world of entertaining web pages (none more so than acid logic), of cheap travel, of Spotify and its collection of 300 trillion CDs (I’m making that number up), of internet dating sites with hundreds of profiles, etc. – the problem of how to choose has become more daunting. A lot of technology evangelists say “more choices are better,” but in many ways they are not. The process of choosing puts a heavy load on our brains. It literally tires us out. That’s why I feel choice shortcuts, like plot or music generators, have value.

This idea that to function efficiently one must eliminate unneeded information is not limited to people. The brain does the same thing. Here’s an interesting passage from Ray Kurzweil’s book “How to Create a Mind.”

[Vision scientists] showed that optic nerves carry ten to twelve output channels, each of which carries only a small amount of data about a given scene. One group of what are called ganglion cells sends information about edges (changes in contrast). Another group detects only large areas of uniform color, whereas a third group is sensitive only to the backgrounds behind figures of interest.

“Even though we think we see the world fully, what we are receiving is really just hints, edges in space and time,” says Werblin. “Those 12 pictures of the world constitute all the information we will ever have about what’s out there, and from those 12 pictures, which are so sparse, we reconstruct the richness of the visual world.”

Kurzweil then notes…

This data reduction is what in the AI [artificial intelligence] field we call “sparse coding.” We have found in creating artificial systems that throwing most of the input information away and retaining only the most salient details provides superior results. Otherwise the limited ability to process information in a neocortex (biological or otherwise) gets overwhelmed.
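The gist of that data reduction can be shown in a few lines: keep only the k largest-magnitude values of an input and discard the rest. This is a crude caricature of sparse coding, not how the retina or any real AI system actually implements it:

```python
# Crude sparse-coding illustration: keep the k most "salient"
# (largest-magnitude) values and zero out everything else.

def sparse_code(values, k):
    """Retain the k largest-magnitude entries of values; zero the rest."""
    keep = set(sorted(range(len(values)), key=lambda i: abs(values[i]))[-k:])
    return [v if i in keep else 0 for i, v in enumerate(values)]

# Only the two strongest signals survive; the rest are thrown away.
print(sparse_code([0.1, -3.0, 0.5, 2.2, -0.2], 2))
```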

So the brain has figured out how to allow passage of only essential information… to choose only the best channels from the 300-channel television, so to speak.