Archive for the 'Technology' Category
November 24th, 2013 by Wil
I’ve mentioned that I’ve been reading Jaron Lanier’s “You Are Not a Gadget,” a tome that bemoans (or should I say “a bemoaning tome”) the free economy which has overtaken music, much of writing (you aren’t paying for this blog post, for example) and possibly soon, movies. Last night I dug up some of Lanier’s various TV appearances on YouTube. (I did not pay to view them, of course.)
Fundamentally Lanier is getting at the question of how we value things. Obviously we’ve long used markets to do so, though they have always been affected by external manipulations: tariffs, price setting, caps by government or industry on how much of something can be produced, etc.
If we look at music we can note that music used to be worth something—generally about a dollar a song though that’s a flawed estimate— and now it’s worth much less. It’s hard to really say what a song is worth these days. I guess they still sell for 49 cents to 99 cents over at iTunes, but most people can dig up any song they want to hear on piracy sites or YouTube or Spotify. I haven’t paid to listen to music for years unless I’m buying a friend’s music (and even then I grumble).
Have markets decided that music has no value? It’s a bit more complex than that. Markets are dependent on the state to enforce the notion of private property. If I can just take what I want, markets really have no purpose (at least to me, the person doing the taking). The debate in the world of music right now is over what the product is and who owns it. If I buy a song, am I free to make a digital copy of it and send it to my friends? Technically, in the eyes of the law, no, but realistically, yes, insofar as laws that aren’t enforced are worthless.
I tend to side against the “free information/piracy” types, but I do concede these are hard questions to answer. How can anyone really own what is essentially information on a computer?
And I’ll entertain even more Marxist thoughts. Let’s look at the realm of physical objects. A chair, say. Some guy cuts down a tree and makes a chair which I buy with my money. Did he really “own” that tree? Maybe it was on his land but how did he get that land? Did an ancestor of his take it from Indians who themselves had no real sense of ownership (since they were hunter-gatherer types who just wandered around)? At some point the earth had no intelligent creatures on it – who owned everything then?
On some level these are silly questions, but I think you get my point. The very premise of ownership of anything is somewhat shaky.
Anyway, Lanier is trippy to watch so I will include a video here.
November 21st, 2013 by Wil
I’ve just started reading a book that I’ve mentioned being interested in: Jaron Lanier’s “You Are Not a Gadget.” The book is something of a condemnation of aspects of modern Internet culture, made all the more damning by the fact that Lanier is a technologist who played a role in the development of the web. Many of the “pro-Internet” views he takes on belong to good friends of his.
One argument he makes is that eccentricity—the expression of unique behaviors and ideas—is being removed from modern culture. Part of this is because of the mob-like nature of Internet comments sections. As I have noticed, in many Internet forums a consensus view often develops among the participants. Those who express opinions different from this view are either mocked or ignored (as I was until I gave up on opinion forums). People toe the party line and are not exposed to ideas that may challenge their views. And, as has been well commented on, people gravitate towards blogs and sites that correspond to their world view, further isolating their thought processes.
(Related to this: I once argued that the fluid communication the web enables makes one realize just how hard it is to be unique.)
Lanier also sees individuality taking a hit on social networking sites like Facebook. In the mid 90s people defined themselves on the web via home pages, many of which were housed on the now-defunct hosting site GeoCities. I remember these pages and you probably do too. They were often amateurish in design and usually had god-awful background tiles that made text unreadable. But they had personality. It was hard to confuse one person’s home page for another’s. The same is not true with Facebook—most people’s pages look basically the same. (Yes, you get your own header but that’s not much.)
Now the fact that everyone’s Facebook pages look similar is hardly the greatest calamity facing society. But I get Lanier’s point. It’s one more chip away from the idea of individuality, of personality. The Internet is not encouraging individuation, but a Borg-like assimilation into a monoculture. I predict this will cause the death of all humanity within 20 years.
October 27th, 2013 by Wil
In some quarters that seems to be the perception. Life is getting uglier and more unstable, global violence more pandemic, etc. In his new book “How to Create a Mind,” Ray Kurzweil notes that…
…a Gallup poll released on May 4, 2011, revealed that only “44 percent of Americans believe that today’s youth will have a better life than their parents.”
Why is this? Kurzweil offers an interesting explanation, one that mirrors arguments I’ve made. There’s just a lot of information flying around overwhelming people. Kurzweil writes:
A primary reason people believe that life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in battle, and if the public could see it at all it was a grainy newsreel in a theater weeks later. During World War I a small elite could read about the progress of the conflict in a newspaper (without pictures). During the 19th century there was almost no access to news in a timely fashion for anyone.
In short, people were blessedly ignorant.
Interestingly, neither Kurzweil nor America’s other great mind – myself – is the first to comment on this problem. In an article entitled “Only Disconnect” in the October 26 New Yorker, the German theorist Siegfried Kracauer is quoted. In 1924, he wrote…
A tiny ball rolls toward you from very far away, expands into a close-up, and finally roars right over you. You can neither stop it nor escape it, but lie there chained, a helpless little doll swept away by the giant colossus in whose ambit it expires. Flight is impossible. Should the Chinese imbroglio be tactfully disembroiled, one is sure to be harried by an American boxing match… All the world-historical events on this planet—not only the current ones but also past events, whose love of life knows no shame—have only one desire: to set up a rendezvous wherever they suppose us to be present.
In 1924 people thought the news was roaring over them! This guy’s head would have exploded if he saw Sean Hannity or Rachel Maddow.
October 20th, 2013 by Wil
A recurring theme on this blog is my contention that medical care in this country (and probably a large part of the first world) is a joke. As I argued here, doctors are incentivized to offer or order care that may not actually be needed.
Recently I stumbled across an op-ed piece (written by a Dartmouth professor who has a book out entitled “Overdiagnosed”). It adds some interesting information to this whole debate. In describing the analysis of one doctor who examined how medical care is dispensed, the article states…
Jack went on to document similarly wildly variable medical practices in the other New England states. But it wasn’t until he compared two of the nation’s most prominent medical communities — Boston and New Haven, Conn. — that the major medical journals took notice. In the late 1980s, both the Lancet and the New England Journal of Medicine published the findings that Boston residents were hospitalized 60% more often than their counterparts in New Haven. Oh, by the way, the rate of death — and the age of death — in the two cities were the same.
So, two populations were getting quite disparate amounts of medical care but were in the same state of health. Observations such as this led to the development of medical care epidemiology, the science of studying the effects of medicine.
Medical care epidemiology examines the effect of exposure to medical care: how differential exposure across time and place relates to population health outcomes. It acknowledges that medical care can produce both benefits and harms, and that conventional concerns about underservice should be balanced by concerns about overdiagnosis and overtreatment. Think of it as surveillance for a different type of outbreak: outbreaks of diagnosis and treatment.
October 19th, 2013 by Wil
I’ve talked a bit about computers and robots replacing humans in various vocations. It struck me today that we should consider creating computer politicians. After all, could they do any worse than humans?
What would a computer politician be? Obviously it would have to be some sort of collection of artificial intelligence modules. Ideally it would have a knowledge base of existing laws, history, geography, world politics, etc.
A computer politician on a regional level would have to represent its voters against the wishes of other regions. For instance, a computer politician would try to get an airplane manufacturing plant built in its region, not one state over.
What if a computer candidate ran against a human candidate? Would the computer candidate be able to tout its strengths over an opponent? Maybe… possibly… a computer candidate could very strongly make the claim that it would be incorruptible, that it would not stray from its mission to serve the needs of its voters (be they on a national or regional level). Obviously it would be immune to sexual dalliances as well, such as those that recently tanked the careers of Bob Filner and Anthony Weiner. And a computer could show that it is programmed not to lie. All these attributes make a computer candidate quite appealing.
Obviously most of this is outside the province of existing artificial intelligence technology. But that might not always be the case.
October 14th, 2013 by Wil
A recent L.A. Times article covered the topic of self driving cars. The gist is that they’re real, they’re coming and they could be on the road by 2020. This is not to say there aren’t concerns.
“It is uncharted waters,” said James Yukevich, a Los Angeles attorney who defends the auto industry from product liability lawsuits. “I don’t think this is an area very many people have thought much about.”
Coddled by robotic chauffeurs, would people retain the driving skills to take over in emergencies? Who would be liable if an autopiloted car runs through a crowd of pedestrians: the owner or the automaker? Would insurance premiums go up or down? Would cyberterrorists figure out how to make Fords blast through school zones at 100 mph?
The article doesn’t explore what I think would be a likely effect from such technology: loss of jobs. Would robot cars effectively put every cab driver out of business? After all, why should a cab company hire a sweaty Armenian to drive cabs around town when a robot car will happily do it without asking for a smoke break? For that matter, what about the transportation industry? Will robot trucks drive the nation’s manufactured goods around?
I suspect in coming years, after mankind has made itself obsolete with its own technology, many will ask, “Why didn’t we see this coming? Why did no one warn us?” At which point I will step out from behind the curtains and say, “Well, if you had been reading my blog you would have been warned.” Then my robots will kill them.
October 13th, 2013 by Wil
Erle Stanley Gardner is the author famous for creating Perry Mason. He was also noted for his prolific output; he wrote 82 Perry Mason novels in his career! How did he do it? By using the plot wheel. (Demo of the wheel at the link.)
Key to Gardner’s remarkable output was his use of the plot wheels invented and patented by one of his predecessors, a British crime novelist named Edgar Wallace. By using different combinations of possible twists and turns for both major and minor characters, Gardner was able to construct narratives that held his readers rapt for several decades.
Crime fiction web site The Kill Zone elucidates…
When Gardner kept getting rejection slips that said “plot too thin,” he knew he had to learn how to do it. After much study he said he “began to realize that a story plot was composed of component parts, just as an automobile is.” He began to build stories, not just make them up on the fly. He made a list of parts and turned those into “plot wheels” which was a way of coming up with innumerable combinations. He was able, with this system, to come up with a complete story idea in thirty seconds.
I’ve been intrigued enough by the concept of a random plot generator to start work on a very basic music idea generator. It doesn’t actually write music; it’s merely a list of ways to accompany or dress up a basic tune (for example, by harmonizing a melody in thirds, or applying Bach-style counterpoint to the melody). I’m not randomly generating options yet, though I might add that component later (and even then I would certainly use my discretion in choosing whether to follow the options it produces).
But why would one want a plot generator or a music idea generator? Why not use the wonderful tool of human creativity? Mainly to overcome a problem that’s all too prevalent these days, the problem of too many options. When constructing a plot it’s very easy to say, “Our hero goes to Istanbul, no wait, Marrakech, no, Tripoli, and there he finds a golden sword, no wait, a magic coffee cup, no, wait, a mystical ashtray and then he…” You get the picture. Stories can suffer analysis paralysis if you can’t cordon off your options. The same goes with music and probably all creative processes. If we had all the time in the world then we could explore all the possibilities, but we seldom do.
The challenge of the “too many options” situation is that you have to know what to throw away. A plot wheel, or my proposed, more advanced music idea generator, basically uses chance to make these decisions. (A bit like John Cage’s chance-derived music.) This isn’t a bad way to get the ball rolling, though it probably results in somewhat hokey, discombobulated output. But if you want to knock something out, or are at a standstill, it’s a legitimate option.
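The core of a plot-wheel-style generator is simple enough to sketch in a few lines of Python. This is just a minimal illustration of the idea; the component lists below are placeholders I invented for this post, not Gardner's actual wheels or my music tool:

```python
import random

# Hypothetical "wheels" of story components, in the spirit of Gardner's system.
SETTINGS = ["Istanbul", "Marrakech", "Tripoli"]
OBJECTS = ["a golden sword", "a magic coffee cup", "a mystical ashtray"]
COMPLICATIONS = ["is being followed", "has lost his memory", "owes a dangerous debt"]

def spin_wheels(seed=None):
    """Spin each 'wheel' once and combine the results into a story premise."""
    rng = random.Random(seed)
    return "Our hero goes to {}, finds {}, and discovers he {}.".format(
        rng.choice(SETTINGS), rng.choice(OBJECTS), rng.choice(COMPLICATIONS)
    )

print(spin_wheels())
```

With three wheels of three options each, that's 27 possible premises from a dozen lines of code; chance picks one and spares you the "no wait, Marrakech" spiral.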
This approach isn’t limited to creative processes, by the way. I used to go to movie rental stores and walk the aisles for close to an hour looking for the perfect movie. I probably would have been better off going to a section I liked (horror or independent cinema), throwing a dart and taking whatever it landed on.
My sense is that in this ever expanding world of choices – of 300 channel television, of a world of entertaining web pages (none more so than acid logic), of cheap travel, of Spotify and its collection of 300 trillion CDs (I’m making that number up), of internet dating sites with hundreds of profiles, etc. etc. – the problem of how to choose has become more daunting. A lot of technology evangelists say, “more choices are better,” but in many ways they are not. The process of choosing puts a heavy load on our brain. It literally tires us out. That’s why I feel choice shortcuts, like plot or music generators, have value.
This idea that to function efficiently one must eliminate unneeded information is not limited to conscious decision making. The brain does the same thing at a lower level. Here’s an interesting passage from Ray Kurzweil’s book “How to Create a Mind.”
[Vision scientists] showed that optic nerves carry ten to twelve output channels, each of which carries only a small amount of data about a given scene. One group of what are called ganglion cells sends information about edges (changes in contrast). Another group detects only large areas of uniform color, whereas a third group is sensitive only to the backgrounds behind figures of interest.
“Even though we think we see the world fully, what we are receiving is really just hints, edges in space and time,” says Werblin. “Those 12 pictures of the world constitute all the information we will ever have about what’s out there, and from those 12 pictures, which are so sparse, we reconstruct the richness of the visual world.”
Kurzweil then notes…
This data reduction is what in the AI [artificial intelligence] field we call “sparse coding.” We have found in creating artificial systems that throwing most of the input information away and retaining only the most salient details provides superior results. Otherwise the limited ability to process information in a neocortex (biological or otherwise) gets overwhelmed.
So the brain has figured out how to allow passage of only essential information… to choose only the best channels from the 300 channel television, so to speak.
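As a toy illustration of the sparse-coding idea Kurzweil describes (this is my own simplified sketch, not an actual retinal model or anything from his book), here is a function that keeps only the k largest-magnitude values of a signal and zeroes out the rest:

```python
def sparsify(signal, k):
    """Keep only the k largest-magnitude values; zero out everything else."""
    if k <= 0:
        return [0.0] * len(signal)
    k = min(k, len(signal))
    # The magnitude of the k-th most salient entry becomes the cutoff.
    threshold = sorted((abs(x) for x in signal), reverse=True)[k - 1]
    result = []
    kept = 0
    for x in signal:
        if abs(x) >= threshold and kept < k:
            result.append(x)
            kept += 1
        else:
            result.append(0.0)
    return result

# Six "channels" of input; only the two strongest responses survive.
print(sparsify([0.1, 3.2, -0.05, 2.7, 0.2, -4.1], 2))
# → [0.0, 3.2, 0.0, 0.0, 0.0, -4.1]
```

Most of the input gets thrown away, yet the retained values are enough to locate the salient features, which is roughly the trick the optic nerve is pulling.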
October 10th, 2013 by Wil
This is a question I feel the internet age has engendered, in relation to both fiction and non-fiction. I’ll tell you why.
Let’s say it’s 40 years ago and you’re writing a book on auto repair. You’re describing a particular procedure and realize that before a person could engage in this procedure they would need to replace their radiator hose. So, you write up a whole section on how to replace a radiator hose. And it’s pretty useful; without it your readers would have to put down your book, go to the bookstore and find a book that explains the radiator hose replacement procedure.
Now, in the modern world of interlinked hypertext you wouldn’t need to include that section, you could just link to any of the numerous sources on the web that explain how to replace a radiator hose.
And, frankly, with this in mind you might realize there’s no point writing your book at all. Unless you are really discussing some aspect of auto repair that hasn’t already been covered in some other easily available source, you would really just be creating redundant information. And information these days, with the web, ebooks and such, is much more “easily available” than it’s ever been. (There is, admittedly, a challenge in searching through all that information for trustworthy and correct material, but with a little tenacity it’s doable.)
How about fiction? Certainly every fiction book is in some sense unique. But as I’ve mentioned, I’ve been doing a little work in the realm of book promotion these days and one thing I’ve noticed is that everybody and their dog has written a fantasy novel about a plucky band of dwarves/elves/humans that go off on a mission to free their land from the dark force that emanates from a great tower/mountain/city off in the distance. They’ve also all written novels about a hard-nosed detective type with a flaw (alcoholism, self-loathing, pedophiliac tendencies) who has to go up against a serial killer of pure evil (and in the process redeem themselves).
Are you really doing the world any kind of favor by writing these kinds of books? I would argue no. In both cases – redundant non-fiction and trite fiction – you’re basically creating more noise, more junk people need to wade through to get to the good stuff (like my work.)
So should people just stop writing altogether? Well, I doubt that’s going to happen. But I hope they consider what they are really adding to “the commons” before taking pen in hand.
October 2nd, 2013 by Wil
I continue to read David Cope’s “Computer Models of Musical Creativity,” which documents his process of creating computer software that can compose music. One point he makes is that context plays into how we respond to music. If we know a musician led a troubled, tragic life we imbue their music with a certain emotional resonance that might not really be there. Or, if we are told the music is about something meaningful, we hear meaning. Cope tells a story of composing a piece of music mainly as an exercise. He was then asked to compose a piece of music for a friend’s memorial service. Being short on time, he used the aforementioned composition. People at the memorial commented on the sadness and “funereal sense” the music provided, even though the music was written as an academic exercise.
In the book, Cope describes another contextual property of music: its uniqueness! He explains…
Since 1980, I have made extraordinary attempts to have Experiments in Musical Intelligence’s [his computer composition software] works performed. Unfortunately, my successes have been few. Performers rarely consider these works seriously. A friend of mine has noted the intimidating nature of the number of outputs possible from computer programs. Uniqueness, he feels, is an extremely important factor in human aesthetics. Knowing that my programs represent an almost infinite font of such works apparently renders them less interesting, no matter how beautiful and different from one another they may be. For many, knowing that I could restart my program at any time, and program a thousand more works, apparently lessens their interest in the one. … This sense of uniqueness is heightened by the fact that for human-created works at least, composers die.
Speaking to that last point, we see this all the time. Jimi Hendrix is alive and well, and that 45 he recorded ten years back is worth X dollars. Suddenly he dies and it’s worth much more, even though it’s the same item it was the day before.
And I think we all understand the general sense Cope is speaking of in that paragraph. It is why a handmade item is worth much more than a factory assembled item which may be of much sturdier construction. This is why people pay millions of dollars for a painting and 30 bucks tops for a poster.
But why does uniqueness drive value? Evolutionary psychology posits a general answer. Those who possess unique things are demonstrating their power and power is an aphrodisiac which increases your ability to pass on genes etc.
I wonder whether we are entering an age of computer produced art, music, film, fiction and what not, and whether the emergence of that age will deflate the market for creative products. I don’t simply ask whether we will pay less for the arts, but whether we will actually enjoy them less. Will knowing that the music we are listening to could have been created in a nanosecond by an artificial intelligence program (regardless of whether it actually was) deprive us of its pleasures?
In closing, I ask you to make note of my subtle yet dramatic use of italicization in this post.
October 1st, 2013 by Wil
Lately I’ve been reading a book called “Computer Models of Musical Creativity” by David Cope. Cope is a musician and programmer who has created software which composes classical music, usually within the style of existing historical composers. The method by which the software does this is complex – you basically have to read the book to understand it – but it does create “human sounding” music that is good, if not great.
One question I’ve had while reading the book is why Cope limits himself to (western) classical music. He explains…
Popular music, for the most part, relies on lyrics, particular timbres, performance context, and many other factors my program cannot control. The mere fact that we know most popular music by its performer, rather than its composer, should confirm the problem.
(Italics are Cope’s.)
He makes a good case, especially in regard to timbre. A song originally played on electric guitar but transferred to zither will not have the same impact. The electric guitar has a certain beefy, manly machismo that gets lost with other instruments. Cope’s point is that it’s not the notes themselves that drive, say, “Black Dog” by Zeppelin, but the notes combined with the guitar tone and various other factors and nuances of performance. On the flip side, Bach’s first invention is largely driven by the notes on the page (combined, one hopes, with a good performance).
Nonetheless, I don’t see why Cope or some other programmer couldn’t create music that takes timbre into account. I could envision a music creation program that tracks trends in instrument timbre and then predicts what will be next and generates some very hip music!
Here’s an example of some of Cope’s music. It’s a bit stiff as it is being rendered by a computer (as opposed to being played by a human (which it could be)), but gives you the picture.