Category Archives: Technology

How to innovate! (Don’t be too innovative.)

As a society, or species (or whatever we are), we tend to laud forward-thinking creative geniuses. When we find one, we hoist them onto a pedestal and treat them as an (to quote this article) “Übermensch [who] stands apart from the masses, pulling them forcibly into a future that they can neither understand nor appreciate.” This is true across all disciplines. Think of Einstein, Beethoven, Picasso, on and on.

So how does one become a genius? Clearly you have to innovate, to do something no one else has done. But there’s a catch here. You can’t be too innovative. You can’t be so ahead of the curve that nobody can really grasp what you’re saying or doing.

Let me propose a thought experiment. Jimi Hendrix travels to Vienna around 1750 and plays his music. Would he be lauded as a genius? Would his guitar playing be heard as the obvious evolution of current trends in music? No, he’d probably be regarded as an idiot making hideous noise and he might be burned at the stake.

But, let music evolve for around 220 years and yes, Jimi is rightfully regarded as a genius. His sound was made palatable by those who came before him, mainly the electric blues guitar players of the 50s and 60s. (Obviously there are a lot of other factors, like race and class and sex, relevant to who gets crowned a genius, but I’m painting in broad strokes here.)

So the trick to being a genius is to be ahead of your time but not too far ahead. The world of science and medicine is filled with examples. Gregor Mendel famously discovered that physical traits could be passed from one generation of life to another. In what was a major breakthrough in our understanding of biology, he theorized what we came to call genes. He published his results and was met with pretty much total indifference. It wasn’t until his work was rediscovered decades later that his findings were put to use. Mendel was too far ahead of his time.

The book “The Mind’s I” mentions the mathematician Giovanni Girolamo Saccheri, who contributed to the discovery of non-Euclidean geometry. His ideas were so controversial that even Saccheri himself rejected them! (At least he did according to the book; there seems to be some debate on this. See the last paragraph on the Saccheri wiki page.) Talk about being too far ahead of your time.

But perhaps the best example of this sort of thing is Ignaz Semmelweis. The Hungarian physician…

…discovered that the incidence of puerperal fever could be drastically cut by the use of hand disinfection in obstetrical clinics.

That’s right, he basically came up with the crazy idea that doctors should wash their hands after touching sick people. Unfortunately…

Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis’s observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings. Semmelweis’s practice earned widespread acceptance only years after his death, when Louis Pasteur confirmed the germ theory and Joseph Lister, acting on the French microbiologist’s research, practiced and operated, using hygienic methods, with great success.

Oh well. Semmelweis probably still had a great career and life right?

Umm, no.

In 1865, Semmelweis was committed to an asylum, where he died at age 47 after being beaten by the guards, only 14 days after he was committed.

Don’t be too ahead of the curve folks.

Where are Photoshop filters for music?

Most people are somewhat familiar with Photoshop, the image editing program that has turbocharged the graphic design industry. And I think most people are generally aware that Photoshop has what are called “filters”—tools with which one can take an ordinary photograph and turn it into a blurry, Monet-style painting, or a pointillist masterpiece, or a piece of pop art a la Lichtenstein. Here, for example, is a filter that gives an image a black and white comic book effect.

How these filters work is a bit beyond me, but one can assume the processes are tailor-made for computation. To consider one example: if a color’s saturation level can be assigned a number, then to desaturate an image (in the style of a watercolor painting) we could create a computational rule like: for each pixel with a saturation value higher than [some threshold number], reduce that pixel’s saturation by 20. Repeat until it falls below the threshold.
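Just to make that concrete, here’s a minimal sketch of such a rule in Python, assuming the Pillow imaging library, a 0-255 saturation scale, and a made-up threshold of 120. (An actual Photoshop filter is obviously far more sophisticated; this is just the flavor of the computation.)

```python
# A toy "watercolor" desaturation filter, per the rule described above.
# THRESHOLD and STEP are made-up values, not anything from Photoshop.
from PIL import Image

THRESHOLD = 120  # hypothetical saturation cutoff (0-255)
STEP = 20        # how much to knock saturation down per pass

def clamp_saturation(sat):
    # "Repeat until it's below the threshold."
    while sat >= THRESHOLD:
        sat -= STEP
    return sat

def watercolor_desaturate(path_in, path_out):
    img = Image.open(path_in).convert("HSV")
    h, s, v = img.split()
    s = s.point(clamp_saturation)  # apply the rule to every pixel's saturation
    Image.merge("HSV", (h, s, v)).convert("RGB").save(path_out)

watercolor_desaturate("photo.jpg", "photo_washed_out.jpg")
```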

So computation and filters have radically affected what’s possible with visual art. It struck me today, why hasn’t this happened with music?

To some degree it has. With MIDI manipulation software, it’s quite easy to swap one synth sound out for another—to make what was initially a trumpet sound like a xylophone, for example. (I find in practice, however, it’s not quite so simple, as how you play a part is dependent on the timbre feedback you get while you play it. When swapping instruments I sometimes have to redo the part.)

You can also easily modulate a piece of music to a new key, so that a piece written in the key of A# can be moved up to C.
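The transposition case, at least, is trivial to express in code. Here’s a toy Python sketch that treats a part as a bare list of MIDI note numbers (60 = middle C); real software also has to move key signatures, chord symbols and so on, so take it as an illustration only.

```python
def transpose(notes, semitones):
    """Shift every MIDI note number by the given interval."""
    return [n + semitones for n in notes]

# A# (i.e. Bb) up to C is a shift of two semitones.
riff_in_a_sharp = [58, 61, 63, 65]         # Bb, Db, Eb, F
riff_in_c = transpose(riff_in_a_sharp, 2)  # [60, 63, 65, 67] = C, Eb, F, G
```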

But it strikes me that there are a number of other ways one could use computation tools to make music creation easier. All of the following commands are the kinds of things I would like to be able to request in a program such as GarageBand. I’m using musical terms here that may not be familiar to non-musicians, but I’ll try to keep it simple.

  • Take all the block chords in the selection and turn them into 8th note arpeggios.
  • Harmonize this melody line in thirds. (A rough sketch of what this might look like in code follows this list.)
  • Take my harmony and render it in the style of a ragtime piano. (I think this actually can be accomplished via the software “Band in a Box.”)
  • Take all the instances of a minor chord that precedes a chord a fourth away and change them into dominant 7th chords. (This would have the effect of “jazzing up” the sound of a song. I vi ii V7 would become I VI7 II7 V7.)
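To give a flavor of how the second item might work under the hood, here’s a rough Python sketch of a “harmonize in thirds” filter. It assumes the melody arrives as scale degrees in C major, which sidesteps the hard parts (key detection, accidentals, voice leading) a tool like GarageBand would actually have to solve.

```python
# A toy "harmonize in thirds" filter: stack a diatonic third on each note.
# C_MAJOR and the scale-degree input format are assumptions for illustration.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # C D E F G A B as MIDI note numbers

def harmonize_in_thirds(melody_degrees):
    """Return (melody, harmony) note pairs; harmony is two scale steps up."""
    pairs = []
    for degree in melody_degrees:
        third = degree + 2  # two diatonic steps = a third
        melody_note = C_MAJOR[degree % 7] + 12 * (degree // 7)
        harmony_note = C_MAJOR[third % 7] + 12 * (third // 7)
        pairs.append((melody_note, harmony_note))
    return pairs

# Degrees 0-7 are the C major scale; the harmony line comes out E F G A B C D E.
print(harmonize_in_thirds([0, 1, 2, 3, 4, 5, 6, 7]))
```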

These are all “surface level” examples – I can think of plenty of filter ideas that would apply on a more granular level.

My point being that this sort of thing is eminently possible; indeed, it has been for years. Maybe it’s out there and I’m unaware of it, but that would surprise me.

This would of course make the production of music much* easier, and enable the exploration of creative ideas with much less effort. That said, it’s a valid concern that this might make the world of music much worse, creating a mob of middling Mozarts who could render listenable but fundamentally undisciplined music. (I would likely fall into this group.) It’s reasonable to argue that the path to compositional virtuosity should require a degree of effort to travel. But these concerns are exactly the sort of thing I think we’re going to be confronting soon enough anyway.

*I originally mistyped this word as “mush” which is ironic since such software tools might result in a lot of musical mush.

Our idiotic natural state

I’ve spent quite a bit of time here arguing that technologically enabled, fast-paced communication (e.g. email, Twitter, texting, Facebook, etc.) has had the effect of making us easily distractible, constantly searching for the next “hit” of information. But this op-ed piece by the author of “The Shallows: What the Internet Is Doing to Our Brains” makes a good point: being easily distracted was our natural state for a long time.

Reading a long sequence of pages helps us develop a rare kind of mental discipline. The innate bias of the human brain, after all, is to be distracted. Our predisposition is to be aware of as much of what’s going on around us as possible. Our fast-paced, reflexive shifts in focus were once crucial to our survival. They reduced the odds that a predator would take us by surprise or that we’d overlook a nearby source of food.

It was only after we achieved levels of relative peace and security that we could focus in on things. It would not surprise me if the distracting presence of the interweb completely rolls back tens of thousands of years of human progress within a generation.

Who owns what?

I’ve mentioned that I’ve been reading Jaron Lanier’s “You Are Not a Gadget,” a tome that bemoans (or should I say “a bemoaning tome”) the free economy which has overtaken music, much of writing (you aren’t paying for this blog post, for example) and possibly, soon, movies. Last night I dug up some of Lanier’s various TV appearances on YouTube. (I did not pay to view them, of course.)

Fundamentally Lanier is getting at the question of how we assign value to things. Obviously we’ve long used markets to do so, though they have always been affected by external manipulations: tariffs, price setting, caps by government or industry on how much of something can be produced, and so on.

If we look at music we can note that music used to be worth something—generally about a dollar a song though that’s a flawed estimate— and now it’s worth much less. It’s hard to really say what a song is worth these days. I guess they still sell for 49 cents to 99 cents over at iTunes, but most people can dig up any song they want to hear on piracy sites or youtube or Spotify. I haven’t paid to listen to music for years unless I’m buying a friend’s music (and even then I grumble.)

Have markets decided that music has no value? It’s a bit more complex than that. Markets are dependent on the state to enforce the notion of private property. If I can just take what I want, markets really have no purpose (at least to me, the person doing the taking). The debate in the world of music right now is over what the product is and who owns it. If I buy a song, am I free to make a digital copy of it and send it to my friends? Technically, in the eyes of the law, no, but realistically, yes, insofar as laws that aren’t enforced are worthless.

I tend to side against the “free information/piracy” types, but I do concede these are hard questions to answer. How can anyone really own what is essentially information on a computer?

And I’ll entertain even more Marxist thoughts. Let’s look at the realm of physical objects. A chair, say. Some guy cuts down a tree and makes a chair which I buy with my money. Did he really “own” that tree? Maybe it was on his land but how did he get that land? Did an ancestor of his take it from Indians who themselves had no real sense of ownership (since they were hunter-gatherer types who just wandered around)? At some point the earth had no intelligent creatures on it – who owned everything then?

On some level these are silly questions, but I think you get my point. The very premise of ownership of anything is somewhat shaky.

Anyway, Lanier is trippy to watch so I will include a video here.

On “You Are Not a Gadget”

I’ve just started reading a book that I’ve mentioned being interested in: Jaron Lanier’s “You Are Not a Gadget.” The book is something of a condemnation of aspects of modern Internet culture, made all the more damning by the fact that Lanier is a technologist who played a role in the development of the web. Many of the “pro-Internet” views he takes on belong to good friends of his.

One argument he makes is that eccentricity—the expression of unique behaviors and ideas—is being removed from modern culture. Part of this is because of the mob-like nature of Internet comments sections. As I have noticed, in many Internet forums a consensus view often develops among the participants. Those who express opinions different from this view are either mocked or ignored (as I was, until I gave up on opinion forums). People toe the party line and are not exposed to ideas that may challenge their views. And, as has been well commented on, people gravitate towards blogs and sites that correspond to their world view, further isolating their thought processes.

(Related to this: I once argued that the fluid communication the web enables makes one realize just how hard it is to be unique.)

Lanier also sees individuality taking a hit on social networking sites like Facebook. In the mid 90s people defined themselves on the web via home pages, many of which were housed on the now-deceased hosting site GeoCities. I remember these pages and you probably do too. They were often amateurish in design and usually had god-awful background tiles that made text unreadable. But they had personality. It was hard to confuse one person’s home page for another’s. The same is not true with Facebook—most people’s pages look basically the same. (Yes, you get your own header but that’s not much.)

Now, the fact that everyone’s Facebook pages look similar is hardly the greatest calamity facing society. But I get Lanier’s point. It’s one more chip away from the idea of individuality, of personality. The Internet is not encouraging individuation, but a Borg-like assimilation into a monoculture. I predict this will cause the death of all humanity within 20 years.

Are things getting worse?

In some quarters that seems to be the perception. Life is getting uglier and more unstable, global violence more pandemic, etc. In “How to Create a Mind,” his new book, Ray Kurzweil notes that…

…a Gallup poll released on May 4, 2011, revealed that only “44 percent of Americans believe that today’s youth will have a better life than their parents.”

Why is this? Kurzweil offers an interesting explanation, one that mirrors arguments I’ve made. There’s just a lot of information flying around overwhelming people. Kurzweil writes:

A primary reason people believe that life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in battle, and if the public could see it at all it was a grainy newsreel in a theater weeks later. During World War I a small elite could read about the progress of the conflict in a newspaper (without pictures). During the 19th century there was almost no access to news in a timely fashion for anyone.

In short, people were blessedly ignorant.

Interestingly, neither Kurzweil nor America’s other great mind – myself – is the first to comment on this problem. In an article entitled “Only Disconnect” in the October 26 New Yorker, the German theorist Siegfried Kracauer is quoted. In 1924, he said…

A tiny ball rolls toward you from very far away, expands into a close-up, and finally roars right over you. You can neither stop it nor escape it, but lie there chained, a helpless little doll swept away by the giant colossus in whose ambit it expires. Flight is impossible. Should the Chinese imbroglio be tactfully disembroiled, one is sure to be harried by an American boxing match… All the world-historical events on this planet—not only the current ones but also past events, whose love of life knows no shame—have only one desire: to set up a rendezvous wherever they suppose us to be present.

In 1924 people thought the news was roaring over them! This guy’s head would have exploded if he saw Sean Hannity or Rachel Maddow.

Are we overdiagnosed by doctors?

A recurring theme on this blog is my contention that medical care in this country (and probably a large part of the first world) is a joke. As I argued here, doctors are incentivized to offer or order care that may not actually be needed.

Recently I stumbled across an op-ed piece (written by a Dartmouth professor who has a book out entitled “Overdiagnosed”). It adds some interesting information to this whole debate. In describing the analysis of one doctor who examined how medical care is dispensed, the article states…

Jack went on to document similarly wildly variable medical practices in the other New England states. But it wasn’t until he compared two of the nation’s most prominent medical communities — Boston and New Haven, Conn. — that the major medical journals took notice. In the late 1980s, both the Lancet and the New England Journal of Medicine published the findings that Boston residents were hospitalized 60% more often than their counterparts in New Haven. Oh, by the way, the rate of death — and the age of death — in the two cities were the same.

So, two populations were getting quite disparate amounts of medical care but were in the same state of health. Observations such as this led to the development of medical care epidemiology, the science of studying the effects of medicine.

Medical care epidemiology examines the effect of exposure to medical care: how differential exposure across time and place relates to population health outcomes. It acknowledges that medical care can produce both benefits and harms, and that conventional concerns about underservice should be balanced by concerns about overdiagnosis and overtreatment. Think of it as surveillance for a different type of outbreak: outbreaks of diagnosis and treatment.

Should we allow computer politicians?

I’ve talked a bit about computers and robots replacing humans in various vocations. It struck me today that we should consider creating computer politicians. After all, could they do any worse than humans?

What would a computer politician be? Obviously it would have to be some sort of collection of artificial intelligence modules. Ideally it would have a knowledge base of existing laws, history, geography, world politics, etc.

A computer politician on a regional level would have to represent its voters against the wishes of other regions. For instance, a computer politician would try to get an airplane manufacturing plant built in its region, not one state over.

What if a computer candidate ran against a human candidate? Would the computer candidate be able to tout its strengths over an opponent? Maybe… possibly… a computer candidate could very strongly make the claim that it would be incorruptible, that it would not stray from its mission to serve the needs of its voters (be they on a national or regional level). Obviously it would be immune to sexual dalliances as well, such as those that recently tanked the careers of Bob Filner and Anthony Weiner. And a computer could show that it is programmed not to lie. All these attributes make a computer candidate quite appealing.

Obviously most of this is outside the province of existing artificial intelligence technology. But that might not always be the case.

Robot cars

A recent L.A. Times article covered the topic of self-driving cars. The gist is that they’re real, they’re coming and they could be on the road by 2020. This is not to say there aren’t concerns.

“It is uncharted waters,” said James Yukevich, a Los Angeles attorney who defends the auto industry from product liability lawsuits. “I don’t think this is an area very many people have thought much about.”

Coddled by robotic chauffeurs, would people retain the driving skills to take over in emergencies? Who would be liable if an autopiloted car runs through a crowd of pedestrians: the owner or the automaker? Would insurance premiums go up or down? Would cyberterrorists figure out how to make Fords blast through school zones at 100 mph?

The article doesn’t explore what I think would be a likely effect of such technology: loss of jobs. Would robot cars effectively put every cab driver out of business? After all, why should a cab company hire a sweaty Armenian to drive cabs around town when a robot car will happily do it without asking for a smoke break? For that matter, what about the transportation industry? Will robot trucks haul the nation’s manufactured goods around?

I suspect in coming years, after mankind has made itself obsolete with its own technology, many will ask, “Why didn’t we see this coming? Why did no one warn us?” At which point I will step out from behind the curtains and say, “Well, if you had been reading my blog you would have been warned.” Then my robots will kill them.

The plot wheel and random idea generators

Erle Stanley Gardner is the author famous for creating Perry Mason. He was also noted for his prolific output; he wrote 82 Perry Mason novels in his career! How did he do it? By using the plot wheel. (Demo of the wheel at the link.)

Key to Gardner’s remarkable output was his use of the plot wheels invented and patented by one of his predecessors, a British crime novelist named Edgar Wallace. By using different combinations of possible twists and turns for both major and minor characters, Gardner was able to construct narratives that held his readers rapt for several decades.

Crime fiction web site The Kill Zone elucidates…

When Gardner kept getting rejection slips that said “plot too thin,” he knew he had to learn how to do it. After much study he said he “began to realize that a story plot was composed of component parts, just as an automobile is.” He began to build stories, not just make them up on the fly. He made a list of parts and turned those into “plot wheels” which was a way of coming up with innumerable combinations. He was able, with this system, to come up with a complete story idea in thirty seconds.

I’ve been intrigued enough by the concept of a random plot generator to start work on a very basic music idea generator. It doesn’t actually write music; it’s merely a list of ways to accompany or dress up a basic tune (for example, by harmonizing a melody in thirds, or applying Bach-style counterpoint to the melody). I’m not randomly generating options, though I might add that component later (and even then I would use my discretion in choosing whether to follow the options it produces).
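If I ever do add that random component, it needn’t be anything fancier than a list and a dice roll, something like this toy Python sketch. (The entries here are my own placeholder examples, not the actual contents of my list.)

```python
# A minimal random music-idea generator: pick a few treatments at random.
import random

TREATMENTS = [
    "harmonize the melody in thirds",
    "turn the block chords into 8th-note arpeggios",
    "add Bach-style counterpoint under the melody",
    "re-voice the harmony as ragtime stride piano",
    "swap the lead instrument for a mallet sound",
]

def suggest(n=3):
    """Offer a handful of treatments; the composer still vets the results."""
    return random.sample(TREATMENTS, k=n)

print(suggest())
```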

But why would one want a plot generator or a music idea generator? Why not use the wonderful tool of human creativity? Mainly to overcome a problem that’s all too prevalent these days: the problem of too many options. When constructing a plot it’s very easy to say, “Our hero goes to Istanbul, no wait, Marrakech, no, Tripoli, and there he finds a golden sword, no wait, a magic coffee cup, no, wait, a mystical ashtray and then he…” You get the picture. Stories can suffer analysis paralysis if you can’t cordon off your options. The same goes for music and probably all creative processes. If we had all the time in the world then we could explore all the possibilities, but we seldom do.

The challenge of the “too many options” situation is that you have to know what to throw away. A plot wheel, or my proposed more advanced music idea generator, basically uses chance to make these decisions. (A bit like John Cage’s chance-derived music.) This isn’t a bad way to get the ball rolling, though it probably results in somewhat hokey, discombobulated output. But if you want to knock something out, or are at a standstill, it’s a legitimate option.

This approach isn’t limited to creative processes, by the way. I used to go to movie rental stores and walk the aisles for close to an hour looking for the perfect movie. I probably would have been better off going to a section I liked (horror or independent cinema), throwing a dart and taking whatever it landed on.

My sense is that in this ever-expanding world of choices – of 300-channel television, of a world of entertaining web pages (none more so than acid logic), of cheap travel, of Spotify and its collection of 300 trillion CDs (I’m making that number up), of internet dating sites with hundreds of profiles, etc. – the problem of how to choose has become more daunting. A lot of technology evangelists say, “more choices are better,” but in many ways they are not. The process of choosing puts a heavy load on our brain. It literally tires us out. That’s why I feel choice shortcuts, like plot or music generators, have value.

This idea that to function efficiently one must eliminate unneeded information is not limited to conscious decision-making. The brain itself does the same thing. Here’s an interesting passage from Ray Kurzweil’s book “How to Create a Mind.”

[Vision scientists] showed that optic nerves carry ten to twelve output channels, each of which carries only a small amount of data about a given scene. One group of what are called ganglion cells sends information about edges (changes in contrast). Another group detects only large areas of uniform color, whereas a third group is sensitive only to the backgrounds behind figures of interest.

“Even though we think we see the world fully, what we are receiving is really just hints, edges in space and time,” says Werblin. “Those 12 pictures of the world constitute all the information we will ever have about what’s out there, and from those 12 pictures, which are so sparse, we reconstruct the richness of the visual world.”

Kurzweil then notes…

This data reduction is what in the AI [artificial intelligence] field we call “sparse coding.” We have found in creating artificial systems that throwing most of the input information away and retaining only the most salient details provides superior results. Otherwise the limited ability to process information in a neocortex (biological or otherwise) gets overwhelmed.

So the brain has figured out how to allow passage of only essential information… to choose only the best channels from the 300-channel television, so to speak.
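For the curious, the data-reduction idea is easy to caricature in code. Here’s a loose Python sketch assuming a simple “keep the top k” rule; real sparse coding learns a dictionary of features, so treat this strictly as the intuition.

```python
# Keep only the k most salient values in a signal and throw the rest away.
import numpy as np

def keep_top_k(signal, k):
    """Zero out everything except the k largest-magnitude entries."""
    sparse = np.zeros_like(signal)
    top = np.argsort(np.abs(signal))[-k:]  # indices of the k biggest values
    sparse[top] = signal[top]
    return sparse

scene = np.random.randn(100)   # a noisy "scene"
hints = keep_top_k(scene, 12)  # roughly a dozen "channels" survive
```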