Archive for the 'Music' Category
December 2nd, 2013 by Wil
Most people are somewhat familiar with Photoshop, the image editing program that has turbocharged the graphic design industry. And I think most people are generally aware that Photoshop has what are called “filters”—tools with which one can take an ordinary photograph and turn it into a blurry, Monet-style painting, or a pointillist masterpiece, or a piece of pop art a la Lichtenstein. Here, for example, is a filter that gives an image a black and white comic book effect.
How these filters work is a bit beyond me, but one can assume the processes are tailor-made for computation. Consider one example: if we realize that a color’s saturation level can be assigned a number, we then realize that to make an image desaturated (in the style of a watercolor painting) we could create a computational rule like: for each pixel with a saturation value higher than [some threshold number], reduce that pixel’s saturation by 20. Repeat until it falls below the threshold.
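For the curious, here’s roughly what that rule looks like in code. This is just a sketch in Python; the threshold and step size are numbers I made up, since the rule above leaves them open, and a real filter would work on a full pixel grid rather than a flat list.

```python
# A minimal sketch of the desaturation rule described above. An "image"
# here is just a list of per-pixel saturation values (0-100). The
# threshold (50) and step (20) are invented for illustration.
THRESHOLD = 50
STEP = 20

def desaturate(saturations, threshold=THRESHOLD, step=STEP):
    """Repeatedly reduce each pixel's saturation by `step`
    until it falls below `threshold`."""
    result = []
    for s in saturations:
        while s >= threshold:
            s -= step
        result.append(max(s, 0))  # saturation can't go negative
    return result

print(desaturate([90, 40, 75, 10]))  # [30, 40, 35, 10]
```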
So computation and filters have radically affected what’s possible with visual art. It struck me today, why hasn’t this happened with music?
To some degree it has. With MIDI manipulation software, it’s quite easy to swap one synth sound out for another—to make what was initially a trumpet sound like a xylophone, for example. (I find in practice, however, it’s not quite so simple, as how you play a part is dependent on the timbre feedback you get while you play it. When swapping instruments I sometimes have to redo the part.)
You can also easily modulate a piece of music to a new key, so that a piece written in the key of A# can be moved up to C.
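Modulation really is the easy case for a computer: in MIDI terms a key change is just a uniform shift of pitch numbers. A toy sketch (the pitches are MIDI note numbers; A# up to C is two semitones):

```python
# Transposing a set of MIDI pitches by a fixed number of semitones.
# Pitch numbers are standard MIDI (60 = middle C); the example chord
# is invented for illustration.
def transpose(pitches, semitones):
    """Shift every pitch by the same interval, i.e. change key."""
    return [p + semitones for p in pitches]

# A Bb (A#) triad moved up a whole step to C:
print(transpose([58, 62, 65], 2))  # [60, 64, 67]
```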
But it strikes me that there’s a number of other ways one could use computation tools to make music creation easier. All of the following commands are the kinds of things I would like to be able to request in a program such as Garage Band. I’m using musical terms here that may not be familiar to non-musicians, but I’ll try to keep it simple.
- Take all the block chords in the selection and turn them into 8th note arpeggios.
- Harmonize this melody line in thirds.
- Take my harmony and render it in the style of a ragtime piano. (I think this actually can be accomplished via the software “Band in a Box.”)
- Take all the instances of a minor chord that precedes a chord a fourth away and change them into dominant 7th chords. (This would have the effect of “jazzing up” the sound of a song. I vi ii V7 would become I VI7 II7 V7.)
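To show how mechanical the first of these “filters” could be, here’s a sketch of the arpeggio idea in Python. The note representation (a chord as a start beat plus a list of MIDI pitches) is something I’ve invented for illustration, not any real program’s API.

```python
# A rough sketch of "turn block chords into 8th note arpeggios."
# A chord event is (start_beat, [MIDI pitches]); the output is a list
# of (beat, pitch) eighth notes cycling bottom-to-top through the chord.
def arpeggiate(chord_events, beats_per_chord=2.0):
    """Spread each chord's pitches across eighth notes (0.5 beat each),
    cycling through the chord until its duration is filled."""
    notes = []
    for start, pitches in chord_events:
        ordered = sorted(pitches)  # arpeggiate from the bottom up
        t, i = start, 0
        while t < start + beats_per_chord:
            notes.append((t, ordered[i % len(ordered)]))
            t += 0.5
            i += 1
    return notes

# A C major chord (C4, E4, G4) held for two beats becomes four eighths:
print(arpeggiate([(0.0, [60, 64, 67])]))
# [(0.0, 60), (0.5, 64), (1.0, 67), (1.5, 60)]
```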
These are all “surface level” examples – I can think of plenty of filter ideas that would apply on a more granular level.
My point being that this sort of thing is eminently possible; indeed, it has been for years. Maybe it’s out there and I’m unaware of it, but that would surprise me.
This would of course make the production of music much* easier, and enable the exploration of creative ideas with much less effort. That said, it’s a valid concern that this might make the world of music much worse, creating a mob of middling Mozarts who could render listenable but fundamentally undisciplined music. (I would likely fall into this group.) It’s reasonable to argue that the path to compositional virtuosity should require a degree of effort to travel. But these concerns are exactly the sort of thing I think we’re going to be confronting soon enough anyway.
*I originally mistyped this word as “mush” which is ironic since such software tools might result in a lot of musical mush.
December 1st, 2013 by Wil
Towards the end of Jaron Lanier’s book “You Are Not a Gadget” he talks of the technology that he’s most famous for: virtual reality. He worked on it quite a while ago, back when VR essentially consisted of strapping on goggles and seeing a wireframe alternate reality (a bit like early first person shooter video games I suppose.) But while experimenting with VR Lanier noticed something interesting—he could redefine his body in the VR world in some dramatic way, but it wasn’t difficult for his brain to master this new form. In the book, he describes one of his hands becoming very big and, at another time, adding on extra mini-arms and becoming a VR human lobster. He describes this a bit here:
I had the experience of my arm suddenly becoming very large, because of a glitch in the software, and yet still being able to pick things up, even though my body was different. And that sensation of being able to alter your body is different from anything else. I mean, it’s almost like a whole new theater of human experience opens up.
I find this notion that the human brain can easily adapt to new body formations interesting but not all that surprising. The neuroscientist Miguel Nicolelis, who’s working on connecting paralyzed people to robotic body parts, talked quite a bit about this in his book “Beyond Boundaries.” He’s connected monkeys to fake legs and the monkeys “get it” pretty quickly. It’s another example of the plasticity of the brain.
So now I’m going to get a little out there. On Thanksgiving I was lying around listening to some music (Grieg, not that it really matters) and I started thinking. Lanier’s point seems to be that people can easily adapt to sensations coming from virtual body parts. They easily accept that “I’m feeling a sensation on my giant thumb,” for example. As such, we should be able to easily inhabit new forms. But do those forms have to be physical, e.g. body shapes of some type? Can we enter forms made of things like sound? Music, for example? While thinking about this, I made a basic effort to “become” the music I was listening to. And I did have a certain oddball glimmer of a sense that my interpretation of the music changed from something I was listening to to something that I was. Like a part of my consciousness entered the “shape” of the music.
This was not a mind bending experience; it was merely a slight change of how I viewed the world, not unlike saying, “I wonder what it would be like to be that guy over there” (except in this case the guy was music.) And I recognize that I don’t totally understand the experience and probably couldn’t capture it in words even if I did. (It does remind me of the time I “became” water.)
Maybe it works something like this. We tickle our ear and we understand that this sensory experience is something that we are doing to ourselves. But we listen to music or see a plane crash into a mountain and we understand that to be something we have no control over, events caused by a foreign entity (e.g. the cd player playing the music or the idiot pilot.) But what if we change our conception to be that the cause of these events is us (in the same way Lanier in his virtual environment accepted the giant hand to be his.) Do we not then, in some way, become something we are not?
November 21st, 2013 by Wil
It’s noted that humans generally like melodies that move with what we call stepwise motion. Basically this just means we like melodies that go from one note to a note next to or close to it. We like it when a C note goes to a D note or an E note, but we aren’t crazy about wider leaps like, say, C to an F over an octave away. And we definitely don’t like a barrage of crazy leaps – that has the “cat walking on a piano” sound.
There are exceptions to this rule of course. A lot of jazz and modern classical music does engage in such wild melodic leaps. But that music isn’t particularly popular with the public at large—it tends to be thought of as intellectual music. I doubt you could find a single popular song in history that uses many wide melodic leaps in a melody (the possible exception might be a novelty song of some sort, probably about robots.)
Why is this? One could argue that such big leaps are difficult to play. There’s some truth to that. On almost any instrument it’s easier to run up a scale than to leap about the melodic range of an instrument.
But I suspect something else is at work. An argument can be made that early man evolved to find beauty in the sounds he heard around him. And in nature there are very few examples of sounds leaping around melodically. Most animal calls are fairly stepwise (though there are some wild bird calls out there.) The vibrating sounds of wind and waterfalls tend to fluctuate subtly. The sounds of nature are like hills and valleys much more so than sharp cliffs.
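One crude way to make the stepwise-versus-leapy distinction concrete: measure the average interval, in semitones, between consecutive melody notes. This little sketch uses MIDI pitch numbers and melodies I made up for illustration.

```python
# Quantifying "leapiness": the mean absolute interval (in semitones)
# between consecutive notes of a melody, given as MIDI pitch numbers.
def mean_interval(melody):
    jumps = [abs(b - a) for a, b in zip(melody, melody[1:])]
    return sum(jumps) / len(jumps)

stepwise = [60, 62, 64, 65, 67]   # mostly seconds, pleasant
leapy = [60, 74, 61, 79, 65]      # wide "cat walking on a piano" leaps

print(mean_interval(stepwise), mean_interval(leapy))  # 1.75 14.75
```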
Thus I have spoken.
October 28th, 2013 by Wil
So Lou Reed just died. I’ve never been a big fan of his; he’s sort of associated with punk rock which is a genre of music I find tedious (though Reed’s proto punk is quite removed from groups like The Sex Pistols or D.R.I.)
However, a few years ago I discovered Reed’s album “Growing up in Public” and I loved it. It has tons of great songs, including this one (which has a certain prescience being that Reed died from complications from a liver transplant.)
Here’s the title track from the album which is also great. Listen to that great cello-esque bass line.
October 18th, 2013 by Wil
A while back I was reading David Cope’s book on the idea of computer created music. Cope is using computers to create music that sounds like it was written by humans, and that’s a laudable, worthwhile goal. But I find myself wondering if the real value of computer music would be creating music only computers could write. Could computers create music that would tax the compositional abilities of mere humans?
What would this music be like? I suppose really long pieces – songs that go on for hours or days might be an example. So too could music that requires an incredible attention to detail, like music with very precise rules about how notes vary their frequency. Or perhaps a kind of variations-on-a-theme process that could generate endless variations on a melody.
Such music might be interesting, but I’ll grant you it might be quite boring; some people these days can barely pay attention to a ten minute song, much less a day long one. I think this music would probably fall into the category of “furniture music”: music meant to be in the background.
Today I came across the website electricsheep.org, which hosts a collection of non-representational artwork generated by computers. The computers use a genetic algorithm, which is essentially a software process that duplicates the process living creatures go through to evolve. As I understand it, genetic algorithms introduce mutations into their output; if a mutation is beneficial, it is adopted as a trait, and if not, it is dropped.
I know that’s a bit hard to understand so let me explain with a practical example. Let’s say the electric sheep computer(s) start(s) off with a big red circle. Some possible variations could be “make the circle blue,” “make the circle more square,” “fill in the circle with polka dots,” etc. The traits that succeed get added to the artwork’s “DNA” and carry forth into new generations with additional mutations. But what constitutes the idea of success in this realm? People rank the artwork on the site. Top ranked artwork is more successful than lower ranked art.
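A toy version of that loop might look like the following. To be clear, this is my own invented sketch, not how electric sheep actually works: the “artworks” are just parameter dictionaries, the numbers are arbitrary, and the fitness function stands in for the human rankings on the site.

```python
import random

random.seed(0)  # deterministic for the example

def mutate(genome):
    """Copy a genome and nudge one of its traits a little."""
    child = dict(genome)
    key = random.choice(list(child))
    child[key] += random.uniform(-0.2, 0.2)
    return child

def evolve(population, fitness, generations=20, keep=4):
    """Rank by fitness, keep the top performers, refill the population
    with mutated copies of the survivors, repeat."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - keep)]
    return max(population, key=fitness)

# Example: start with identical red circles, reward "blueness."
start = [{"red": 1.0, "blue": 0.0} for _ in range(12)]
best = evolve(start, fitness=lambda g: g["blue"] - g["red"])
print(best)
```

Since the survivors are always carried forward, the best score can never get worse from one generation to the next, which is the whole trick of the algorithm.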
Could such a process be applied to computer music? What’s electric sheep doing exactly? It’s taking shapes – circles, squares, grids, cloud type shapes – and changing them. Could something similar happen with music? First we’d have to find some analogues to shapes in music. These could be chords, melodies, maybe even rhythms. Could a genetic algorithm be applied that morphed these musical elements, and then tracked listener preferences, adopting the high scoring mutations into the music’s DNA? That’s the kind of music only a computer could compose.
One final point here. I suspect one other area worth exploring in computer generated music would be microtonal music. This is…
…music using microtones—intervals of less than an equally spaced semitone. Microtonal music can also refer to music which uses intervals not found in the Western system of 12 equal intervals to the octave.
Manipulating music on that level seems like something computers would be good for.
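For instance, the frequencies of an equal-division scale fall straight out of one formula, whatever the number of divisions. A quick sketch (24 divisions per octave gives quarter tones; the fractional cent values that trip up human performers are no trouble for a machine):

```python
# Frequency after `steps` steps of a scale that divides the octave
# into `divisions` equal parts (12 is standard Western tuning,
# 24 adds quarter tones, 31, 53, etc. are other microtonal systems).
def edo_frequencies(base_hz, divisions, steps):
    return base_hz * 2 ** (steps / divisions)

# A quarter tone above A440 in a 24-division octave:
print(round(edo_frequencies(440.0, 24, 1), 2))  # 452.89
```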
October 13th, 2013 by Wil
Erle Stanley Gardner is the author famous for creating Perry Mason. He was also noted for his prolific output; he wrote 82 Perry Mason novels in his career! How did he do it? By using the plot wheel. (Demo of the wheel at the link.)
Key to Gardner’s remarkable output was his use of the plot wheels invented and patented by another of his successors, a British crime novelist named Edgar Wallace. By using different combinations of possible twists and turns for both major and minor characters, Gardner was able to construct narratives that held his readers rapt for several decades.
Crime fiction web site The Kill Zone elucidates…
When Gardner kept getting rejection slips that said “plot too thin,” he knew he had to learn how to do it. After much study he said he “began to realize that a story plot was composed of component parts, just as an automobile is.” He began to build stories, not just make them up on the fly. He made a list of parts and turned those into “plot wheels” which was a way of coming up with innumerable combinations. He was able, with this system, to come up with a complete story idea in thirty seconds.
I’ve been intrigued enough by the concept of a random plot generator to start work on a very basic music idea generator. It doesn’t actually write music; it’s merely a list of ways to accompany or dress up a basic tune (for example, by harmonizing a melody in thirds, or applying Bach style counterpoint to the melody.) I’m not randomly generating options though I might try and add that component later (though I would certainly use my discretion in choosing whether to follow the options it produces.)
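In code, the plot wheel idea is almost embarrassingly simple: several wheels of options, one spin each, combine the results. The wheels below are my own invented examples (drawn from the kinds of “dress up a tune” options I mentioned), not Gardner’s actual wheels.

```python
import random

# Hypothetical "wheels" for a music idea generator, in the spirit of
# Gardner's plot wheels. The option lists are invented for illustration.
WHEELS = {
    "harmony": ["harmonize in thirds", "harmonize in sixths",
                "add a pedal tone", "reharmonize with secondary dominants"],
    "texture": ["block chords", "eighth note arpeggios",
                "Bach-style counterpoint", "ragtime stride pattern"],
    "rhythm": ["straight", "swung", "syncopated"],
}

def spin(wheels=WHEELS):
    """Pick one option from each wheel, like spinning the plot wheels."""
    return {name: random.choice(options) for name, options in wheels.items()}

idea = spin()
print(f"Try: {idea['harmony']}, played as {idea['texture']}, {idea['rhythm']}.")
```

As with the plot wheels, the point isn’t that any one spin is brilliant; it’s that a spin takes seconds and cuts off the endless second-guessing.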
But why would one want a plot generator or a music idea generator? Why not use the wonderful tool of human creativity? Mainly to overcome a problem that’s all too prevalent these days, the problem of too many options. When constructing a plot it’s very easy to say, “Our hero goes to Istanbul, no wait, Marrakech, no, Tripoli, and there he finds a golden sword, no wait, a magic coffee cup, no, wait, a mystical ashtray and then he…” You get the picture. Stories can suffer analysis paralysis if you can’t cordon off your options. The same goes with music and probably all creative processes. If we had all the time in the world then we could explore all the possibilities, but we seldom do.
The challenge of the “too many options” situation is that you have to know what to throw away. A plot wheel, or my proposed more advanced music idea generator basically uses chance to make these decisions. (A bit like John Cage’s chance derived music.) This isn’t a bad way to get the ball rolling though it probably results in somewhat hokey, discombobulated output. But if you want to knock something out, or are at a standstill, it’s a legitimate option.
This approach isn’t limited to creative processes, by the way. I used to go to movie rental stores and walk the aisles for close to an hour looking for the perfect movie. I probably would have been better off going to a section I liked (horror or independent cinema), throwing a dart and taking whatever it landed on.
My sense is that in this ever expanding world of choices – of 300 channel television, of a world of entertaining web pages (none more so than acid logic), of cheap travel, of Spotify and its collection of 300 trillion cds (I’m making that number up), of internet dating sites with hundreds of profiles etc. etc. – the problem of how to choose has become more daunting. A lot of technology evangelists say, “more choices are better,” but in many ways they are not. The process of choosing puts a heavy load on our brain. It literally tires us out. That’s why I feel choice shortcuts, like plot or music generators, have value.
This idea that to function efficiently one must eliminate unneeded information is not limited to conscious decision making. The brain itself does the same thing. Here’s an interesting passage from Ray Kurzweil’s book “How to Create a Mind.”
[Vision scientists] showed that optic nerves carry ten to twelve output channels, each of which carries only a small amount of data about a given scene. One group of what are called ganglion cells sends information about edges (changes in contrast). Another group detects only large areas of uniform color, whereas a third group is sensitive only to the backgrounds behind figures of interest.
“Even though we think we see the world fully, what we are receiving is really just hints, edges in space and time,” says Werblin. “Those 12 pictures of the world constitute all the information we will ever have about what’s out there, and from those 12 pictures, which are so sparse, we reconstruct the richness of the visual world.
Kurzweil then notes…
This data reduction is what in the AI [artificial intelligence] field we call “sparse coding.” We have found in creating artificial systems that throwing most of the input information away and retaining only the most salient details provides superior results. Otherwise the limited ability to process information in a neocortex (biological or otherwise) gets overwhelmed.
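A crude illustration of the “throw most of it away” idea: keep only the few largest values in a signal and zero out the rest. I should stress this is my own toy stand-in for sparse coding, where salience is just magnitude; real systems use learned features, not raw size.

```python
# Toy "sparse coding": retain only the `keep` largest-magnitude
# samples of a signal and discard (zero out) everything else.
def sparse_code(signal, keep=3):
    ranked = sorted(range(len(signal)), key=lambda i: abs(signal[i]),
                    reverse=True)
    salient = set(ranked[:keep])  # indices of the most salient samples
    return [x if i in salient else 0 for i, x in enumerate(signal)]

print(sparse_code([0.1, -4.0, 0.3, 2.5, -0.2, 1.1]))
# [0, -4.0, 0, 2.5, 0, 1.1]
```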
So the brain has figured out how to allow passage of only essential information… to choose only the best channels from the 300 channel television, so to speak.
October 8th, 2013 by Wil
I’m sure most people are, by now, sick of me repeating my belief that emotions are merely physical sensations felt in the body, often the viscera. (If you’re not, here’s a good, detailed rundown.) Basically, I see the process as a computational one. Your brain receives some input, say, your girlfriend announcing that she’s been having an affair with your brother, and your brain/body outputs emotion in the form of felt changes in the body like a stomach ache, the tightening of the chest, involuntary gnashing of teeth, etc.
I’ve been working on a score for a short horror film lately and am realizing how much of my job is to program the viewer’s brain to have an emotional response. So if a character is walking towards a house with a killer in it, I use music to ratchet up the tension, to cause chills to run down the viewer’s spine (or some similar symptom of fear.) Am I succeeding at this? In some cases yes, in others no. It’s a delicate art, one I haven’t really figured out. It’s a matter of learning what specific musical “tools” cause what specific emotional reactions. With horror you end up working with a lot of dissonant chords and melodies, even getting into atonal music. (Atonal means there’s no clear main chord that the music can resolve to. This works perfectly for scenes of unresolved ambiguity.)
Ultimately, it would be nice to really map out the connections between music and emotions so that you could literally program people’s emotions by playing a piece of music. Then I could program unwilling victims to become my army of the night, to go forth and commit heinous acts in my name. And when the police arrived at my doorstep I would merely blush and say, “What, me? I’m just sitting here playing the piano.”
October 2nd, 2013 by Wil
I continue to read David Cope’s “Computer Models of Musical Creativity,” which documents his process of creating computer software that can compose music. One point he makes is that context plays into how we respond to music. If we know a musician led a troubled, tragic life we imbue their music with a certain emotional resonance that might not really be there. Or, if we are told the music is about something meaningful, we hear meaning. Cope tells a story of composing a piece of music mainly as an exercise. He was then asked to compose a piece of music for a friend’s memorial service. Being short on time, he used the aforementioned composition. People at the memorial commented on the sadness and “funereal sense” the music provided, even though the music was written as an academic exercise.
In the book, Cope describes another contextual property of music: its uniqueness! He explains…
Since 1980, I have made extraordinary attempts to have Experiments in Musical Intelligence’s [his computer composition software] works performed. Unfortunately, my successes have been few. Performers rarely consider these works seriously. A friend of mine has noted the intimidating nature of the number of outputs possible from computer programs. Uniqueness, he feels, is an extremely important factor in human aesthetics. Knowing that my programs represent an almost infinite font of such works apparently renders them less interesting, no matter how beautiful and different from one another they may be. For many, knowing that I could restart my program at any time, and program a thousand more works, apparently lessens their interest in the one. … This sense of uniqueness is heightened by the fact that for human-created works at least, composers die.
Speaking to that last point, we see this all the time. Jimi Hendrix is alive and well and that 45 he recorded ten years back is worth X dollars. Suddenly he dies and it’s worth much more, even though it’s the same item it was a day previous.
And I think we all understand the general sense Cope is speaking of in that paragraph. It is why a handmade item is worth much more than a factory assembled item which may be of much sturdier construction. This is why people pay millions of dollars for a painting and 30 bucks tops for a poster.
But why does uniqueness drive value? Evolutionary psychology posits a general answer. Those who possess unique things are demonstrating their power and power is an aphrodisiac which increases your ability to pass on genes etc.
I wonder whether we are entering an age of computer produced art, music, film, fiction and what not, and whether the emergence of that age will deflate the market for creative products. I don’t simply ask whether we will pay less for the arts, but whether we will actually enjoy them less. Will knowing that the music we are listening to could have been created in a nanosecond by an artificial intelligence program (regardless of whether it actually was) deprive us of its pleasures?
In closing, I ask you to make note of my subtle yet dramatic use of italicization in this post.
October 1st, 2013 by Wil
Lately I’ve been reading a book called “Computer Models of Musical Creativity” by David Cope. Cope is a musician and programmer who has created software which composes classical music, usually within the style of existing historical composers. The method by which the software does this is complex – you basically have to read the book to understand it – but it does create “human sounding” music that is good, if not great.
One question I’ve had while reading the book is why Cope limits himself to (western) classical music. He explains…
Popular music, for the most part, relies on lyrics, particular timbres, performance context, and many other factors my program cannot control. The mere fact that we know most popular music by its performer, rather than its composer, should confirm the problem.
(Italics are Cope’s.)
He makes a good case, especially in regards to timbre. A song originally played on electric guitar but transferred to zither will not have the same impact. The electric guitar has a certain beefy, manly machismo that gets lost with other instruments. Cope’s point is that it’s not the notes themselves that drive, say, “Black Dog” by Zeppelin, but the notes combined with the guitar tone and various other factors and nuances of performance. On the flip side, Bach’s first invention is largely driven by the notes on the page (combined, one hopes, with a good performance.)
Nonetheless, I don’t see why Cope or some other programmer couldn’t create music that takes timbre into account. I could envision a music creation program that tracks trends in instrument timbre and then predicts what will be next and generates some very hip music!
Here’s an example of some of Cope’s music. It’s a bit stiff as it is being rendered by a computer (as opposed to being played by a human (which it could be)), but gives you the picture.
September 30th, 2013 by Wil
For years I’ve gnashed my teeth while reading idiotic articles that present the history of rock and roll through the lens of punk rock. According to these authors, rock was born a free and rebellious movement, was co-opted by corporate America in the 70s, tried to wrestle free via the punk and grunge movements (insert tear stained worship of Saint Cobain here) and was then finally put to death. These authors never concede that many forces have affected Rock through its history and they certainly never concede that punk rock is – by and large – absolutely worthless dog feces disguised (poorly) as music.
As such, it was quite a pleasure to come across one of these articles and find that it is almost universally panned in the comments section. I can only imagine the young author thought he would achieve some degree of acclaim by parroting the talking points of his Sociology 101 professor but instead found himself mocked and humiliated by his peers. I only pray that such a virtual ass kicking leads him to experience a lifetime of sexual inadequacy.
You can read the piece here: How Technology Killed Rock And Roll. I’ll highlight some of the great replies.
“This is the least cohesive article I’ve ever read on MTT.
Really? Rock n’ roll is dead because of technology? Really?”
“Oh noes! You’ve pushed at your straw man with all your might, and now it’s fallen over.”
“Sorry but this article is complete and utter nonsense,”
“We need a timeline on public announcements that rock and roll was dead, starting maybe in the 1950′s, when it was about to be replaced by trad jazz.”
And on and on…