Archive for the 'Technology' Category
January 6th, 2014 by Wil
Over at Reason magazine, there’s an article contemplating the possibility of autonomous technology. This isn’t technology that is conscious (though plenty of people are contemplating that) but software and robots that exist as entities that can support themselves economically. The article muses on a self-driving car that operates as a cab and uses its income to pay for gas and repairs. Or investment software that buys and sells stocks; some invest-bots might make millions, some would go broke, but they would be out there.
The author states…
This little [invest-]bot can be made with technology that we have available today, and yet it is totally incompatible with our legal system. After all, it is a program that makes and spends money and acts in the world, but isn’t owned by a human or a corporation. It essentially owns itself and its capital. The law doesn’t contemplate such a thing.
It’s a fascinating idea—computer programs that are independent, money-making units. But if they are too successful, will the humans begin to eye them jealously? Will men seethe in anger when they discover millionaire robots taking their wives out for a night on the town? Are the seeds of the coming human/robot war being sown as we speak?
December 27th, 2013 by Wil
For a while now I’ve been arguing that the digitization of content is leading to the devaluation of content. This has certainly affected the news industry (see dying newspapers and magazines) and the music industry (see the loss of revenue and the general sense that music is free.) I’ve long felt the movie industry will become a partial victim to this trend; the main thing sparing it right now is that files of ripped movies are still pretty big in terms of data and thus unwieldy to upload and download.
That said, I’ve been noticing a lot of pirated movies over at Youtube. I’ve even been passing some evenings checking them out; there are a lot of great classic and independent horror flicks there. (I wrote about some of them here.)
The ethical problem is, of course, that these movies are pirated and the people who produced and contributed to the films see not one dime from their films being shown on Youtube. I am fortunately free of ethics but other folks may be bothered by this. You often find these “information should be free” types twisting themselves into pretzels of logic to defend piracy.
One thing that I am struck by is the fact that Google—the owner of Youtube—is selling advertisements that run before the movies. So, not only are they looking the other way to movie piracy, they are profiting from it. I was a little curious about this state of affairs and tracked down this HuffPo article which examines it. As you might surmise, Google is protected because they are not legally responsible for what their users upload (which, I admit, is probably a fair position.) Nonetheless, this development clearly does not encourage the production of cinema, especially independent cinema.
The article states…
…according to judgments in the YouTube v Viacom case, the DMCA provides YouTube with protection against copyright infringements carried out by its users. YouTube is not under an obligation to ensure that the service it provides complies with copyright law. Instead, copyright owners must report infringement to YouTube. So rather than YouTube creating a clever piece of software, or employing a small team of compliance officers, studios and producers all over the world have to duplicate effort and cost to monitor whether their titles are being uploaded. It doesn’t look like this inefficient method of policing copyright is going to change any time soon, but things get really interesting when one looks at the money that is being made from illegal uploads. YouTube may not have a responsibility to ensure that the content it publishes complies with the law, but does that entitle it to derive revenue from illegally uploaded content?
Of related interest: this article alleging Google, Yahoo and Bing knowingly sell ads to spammers. So much for “do no evil.” But what are we going to do about it? Stop using Google? Of course not. We are their bitches.
December 23rd, 2013 by Wil
Al Goldstein, publisher of famous porn mag “Screw” died recently. He had an epic life arc; at one point he was a millionaire from “Screw”, but…
By the mid-2000s, Goldstein was homeless. He once got a job as a restaurant greeter, only to lose it when the management discovered he was sleeping on the premises.
Even in that period he reveled in irreverent humor. In 2005, when he worked at a New York deli, he told the Washington Post, “I’ve gone from broads to bagels.” But there was also the ever-present anger. “Anyone who wishes ill on me should feel vindicated,” he told the New York Times in 2004, “because my life has turned into a total horror.”
What brought the duke of porn down? The ubiquitous availability of porn once the internet appeared. I wonder if that is a sign of things to come for other industries—music, books, movies—that find their content becoming more and more digitized?
I discussed Al’s strange, public access talk show “Midnight Blue” a while back.
The general focus of the show was interviews, usually with female porn stars, though eventually non-porn guests like O.J. Simpson, Arnold Schwarzenegger, Gilbert Gottfried and Debbie Harry made appearances. Al’s primary interest was that of sexual technique; he would often throw out provocative, curse laden queries along the lines of “How do you like your pussy licked?” or “Is there anything wrong with me fucking a chick with my nose?” The porn royalty sat in the hot seat—too jaded to be shocked at Al’s questions—and offered serious answers. (One of my favorite “MB” moments occurred when Al asked Carol Connors, the “forgotten actress of Deep Throat” (and mother of current mainstream film actress Thora Birch!) whether she would ever perform bestiality. “No,” Carol demurred, “But I love animals!”)
December 22nd, 2013 by Wil
As a society, or species, (or whatever we are), we tend to laud forward thinking creative geniuses. When we find one, we hoist them onto a pedestal and treat them as an (to quote this article) “Übermensch [who] stands apart from the masses, pulling them forcibly into a future that they can neither understand nor appreciate.” This is true across all disciplines. Think of Einstein, Beethoven, Picasso, on and on.
So how does one become a genius? Clearly you have to innovate, to do something no one else has done. But there’s a catch here. You can’t be too innovative. You can’t be so ahead of the curve that nobody can really grasp what you’re saying or doing.
Let me propose a thought experiment. Jimi Hendrix travels to Vienna around 1750 and plays his music. Would he be lauded as a genius? Would his guitar playing be heard as the obvious evolution of current trends in music? No, he’d probably be regarded as an idiot making hideous noise and he might be burned at the stake.
But, let music evolve for around 220 years and yes, Jimi is rightfully regarded as a genius. His sound was made palatable by those who came before him, mainly electric blues guitar players of the 50s and 60s. (Obviously there are a lot of other factors (like race and class and sex) relevant to who gets crowned a genius but I’m painting in broad strokes here.)
So the trick to being a genius is to be ahead of your time but not too ahead. The world of science and medicine is filled with examples. Gregor Mendel famously discovered that physical traits could be passed from one generation of life to another. In what was a major breakthrough in our understanding of biology, he theorized what we came to call genes. He published his results and was met with pretty much total indifference. It wasn’t until his work was rediscovered decades later that his findings were applied. Mendel was too ahead of his time.
The book “The Mind’s I” notes the mathematician Giovanni Girolamo Saccheri who contributed to the discovery of non-Euclidian geometry. His ideas were so controversial that even Saccheri himself rejected them! (At least he did according to the book; there seems some debate on this. See the last graph on the Saccheri wiki page.) Talk about being too ahead of your time.
But perhaps the best example of this sort of thing is Ignaz Semmelweis. The Hungarian physician…
…discovered that the incidence of puerperal fever could be drastically cut by the use of hand disinfection in obstetrical clinics.
That’s right, he basically came up with the crazy idea that doctors should wash their hands after touching sick people. Unfortunately…
Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis’s observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings. Semmelweis’s practice earned widespread acceptance only years after his death, when Louis Pasteur confirmed the germ theory and Joseph Lister, acting on the French microbiologist’s research, practiced and operated, using hygienic methods, with great success.
Oh well. Semmelweis probably still had a great career and life right?
In 1865, Semmelweis was committed to an asylum, where he died at age 47 after being beaten by the guards, only 14 days after he was committed.
Don’t be too ahead of the curve folks.
December 2nd, 2013 by Wil
Most people are somewhat familiar with Photoshop, the image editing program that has turbocharged the graphic design industry. And I think most people are generally aware that Photoshop has what are called “filters”—tools with which one can take an ordinary photograph and turn it into a blurry, Monet style painting, or a pointillist masterpiece, or a piece of pop art a la Lichtenstein. Here, for example, is a filter that gives an image a black and white comic book effect.
How these filters work is a bit beyond me but one can assume the processes are tailor made for computation. To consider one example, if we realize that a color’s saturation level can be assigned a number, we then realize that to make an image desaturated (in the style of a water color painting) we could create a computational rule like: for each pixel with a saturation value higher than [some threshold number], reduce that pixel’s saturation by 20. Repeat until every pixel is at or below the threshold.
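To make that rule concrete, here’s a minimal Python sketch. It assumes a toy representation where each pixel is a (hue, saturation, lightness) tuple with saturation on a 0–100 scale; the function name and threshold value are my own inventions, not how Photoshop actually works internally.

```python
# Sketch of the desaturation rule described above: any pixel whose
# saturation exceeds a threshold has its saturation reduced by 20,
# repeating until every pixel is at or below the threshold.
# Pixels are modeled as (hue, saturation, lightness) tuples,
# with saturation ranging from 0 to 100.

THRESHOLD = 40  # hypothetical cutoff; a real filter would expose this as a slider

def desaturate(pixels, threshold=THRESHOLD, step=20):
    """Repeatedly pull oversaturated pixels down by `step`."""
    result = list(pixels)
    while any(s > threshold for (_, s, _) in result):
        result = [
            (h, max(s - step, 0), l) if s > threshold else (h, s, l)
            for (h, s, l) in result
        ]
    return result

# A three-pixel "image": one muted, one vivid, one mid-range.
image = [(200, 10, 50), (30, 95, 50), (120, 45, 50)]
print(desaturate(image))
# → [(200, 10, 50), (30, 35, 50), (120, 25, 50)]
```

The vivid pixel (saturation 95) takes three passes to drop below the threshold, while the muted one is never touched, which matches the "repeat until below the threshold" rule above.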
So computation and filters have radically affected what’s possible with visual art. It struck me today, why hasn’t this happened with music?
To some degree it has. With MIDI manipulation software, it’s quite easy to swap one synth sound out for another—to make what was initially a trumpet sound like a xylophone, for example. (I find in practice, however, it’s not quite so simple, as how you play a part depends on the timbral feedback you get while you play it. When swapping instruments I sometimes have to redo the part.)
You can also easily modulate a piece of music to a new key, so that a piece written in the key of A# can be moved up to C.
But it strikes me that there are a number of other ways one could use computational tools to make music creation easier. All of the following commands are the kinds of things I would like to be able to request in a program such as GarageBand. I’m using musical terms here that may not be familiar to non-musicians, but I’ll try to keep it simple.
- Take all the block chords in the selection and turn them into 8th note arpeggios.
- Harmonize this melody line in thirds.
- Take my harmony and render it in the style of a ragtime piano. (I think this actually can be accomplished via the software “Band in a Box.”)
- Take all the instances of a minor chord that precedes a chord a fourth away and change them into dominant 7th chords. (This would have the effect of “jazzing up” the sound of a song. I vi ii V7 would become I VI7 II7 V7.)
These are all “surface level” examples – I can think of plenty of filter ideas that would apply on a more granular level.
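To illustrate how mechanical these transformations really are, here’s a Python sketch of the first command above (block chords into 8th-note arpeggios). The note representation is a toy of my own devising, not any real sequencer’s API: a chord is a tuple of (start beat, duration in beats, list of MIDI pitches).

```python
# Sketch of the "block chords to 8th-note arpeggios" filter.
# Each chord (start_beat, duration_in_beats, [midi_pitches]) is replaced
# by a run of 8th notes (half a beat each) cycling up through the
# chord tones, wrapping around if the chord is held long enough.

def arpeggiate(chords):
    notes = []  # each note is (start_beat, duration, pitch)
    for start, duration, pitches in chords:
        n_eighths = int(duration * 2)  # two 8th notes per beat
        for i in range(n_eighths):
            pitch = pitches[i % len(pitches)]
            notes.append((start + i * 0.5, 0.5, pitch))
    return notes

# A C major block chord (C-E-G, MIDI 60-64-67) held for two beats:
print(arpeggiate([(0, 2, [60, 64, 67])]))
# → [(0.0, 0.5, 60), (0.5, 0.5, 64), (1.0, 0.5, 67), (1.5, 0.5, 60)]
```

The chord-substitution and harmonization commands would be the same kind of thing: pattern-match over the note data, rewrite the matches. None of it requires anything beyond ordinary list processing.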
My point being that this sort of thing is eminently possible; indeed, it has been for years. Maybe it’s out there and I’m unaware of it, but that would surprise me.
This would of course make the production of music much* easier, and enable the exploration of creative ideas with much less effort. That said, it’s a valid concern that this might make the world of music much worse, creating a mob of middling Mozarts who could render listenable but fundamentally undisciplined music. (I would likely fall into this group.) It’s reasonable to argue that the path to compositional virtuosity should require a degree of effort to travel. But these concerns are exactly the sort of thing I think we’re going to be confronting soon enough anyway.
*I originally mistyped this word as “mush” which is ironic since such software tools might result in a lot of musical mush.
December 1st, 2013 by Wil
I’ve spent quite a bit of time here arguing that technologically enabled fast-paced communication (e.g. email, twitter, texting, facebook etc.) has had the effect of making us easily distractible, constantly searching for the next “hit” of information. But this op-ed piece by the author of “The Shallows: What the Internet Is Doing to Our Brains” makes a good point: being easily distracted was our natural state for a long time.
Reading a long sequence of pages helps us develop a rare kind of mental discipline. The innate bias of the human brain, after all, is to be distracted. Our predisposition is to be aware of as much of what’s going on around us as possible. Our fast-paced, reflexive shifts in focus were once crucial to our survival. They reduced the odds that a predator would take us by surprise or that we’d overlook a nearby source of food.
It was only after we achieved levels of relative peace and security that we could focus in on things. It would not surprise me if the distracting presence of the interweb completely rolls back tens of thousands of years of human progress within a generation.
November 24th, 2013 by Wil
I’ve mentioned that I’ve been reading Jaron Lanier’s “You Are Not a Gadget,” a tome that bemoans (or should I say “a bemoaning tome”) the free economy which has overtaken music, much of writing (you aren’t paying for this blog post, for example) and possibly soon, movies. Last night I dug up some of Lanier’s various TV appearances on Youtube. (I did not pay to view them of course.)
Fundamentally Lanier is getting at the question of how we valuate things. Obviously we’ve long used markets to do so, though they have always been affected by external manipulations e.g. tariffs, price setting, caps by government or industry on how much of something can be produced etc.
If we look at music we can note that music used to be worth something—generally about a dollar a song though that’s a flawed estimate— and now it’s worth much less. It’s hard to really say what a song is worth these days. I guess they still sell for 49 cents to 99 cents over at iTunes, but most people can dig up any song they want to hear on piracy sites or youtube or Spotify. I haven’t paid to listen to music for years unless I’m buying a friend’s music (and even then I grumble.)
Have markets decided that music has no value? It’s a bit more complex than that. Markets are dependent on the state to enforce the notion of private property. If I can just take what I want, markets really have no purpose (at least to me, the person doing the taking.) The debate in the world of music right now is over what the product is and who owns it. If I buy a song, am I free to make a digital copy of it and send it to my friends? Technically, in the eyes of the law, no, but realistically, yes, insomuch as laws that aren’t enforced are worthless.
I tend to side against the “free information/piracy” types, but I do concede these are hard questions to answer. How can anyone really own what is essentially information on a computer?
And I’ll entertain even more Marxist thoughts. Let’s look at the realm of physical objects. A chair, say. Some guy cuts down a tree and makes a chair which I buy with my money. Did he really “own” that tree? Maybe it was on his land but how did he get that land? Did an ancestor of his take it from Indians who themselves had no real sense of ownership (since they were hunter-gatherer types who just wandered around)? At some point the earth had no intelligent creatures on it – who owned everything then?
On some level these are silly questions, but I think you get my point. The very premise of ownership of anything is somewhat shaky.
Anyway, Lanier is trippy to watch so I will include a video here.
November 21st, 2013 by Wil
I’ve just started reading a book that I’ve mentioned being interested in: Jaron Lanier’s “You Are Not a Gadget.” The book is something of a condemnation of aspects of modern Internet culture, made all the more damning by the fact that Lanier is a technologist who played a role in the development of the web. Many of the “pro-Internet” views he takes on belong to good friends of his.
One argument he makes is that eccentricity—the expression of unique behaviors and ideas—is being removed from modern culture. Part of this is because of the mob-like nature of Internet comments sections. As I have noticed, in many Internet forums a consensus view often develops among the participants. Those who express opinions different from this view are either mocked or ignored (as I have been until I gave up on opinion forums.) People toe the party line and are not exposed to ideas that may challenge their views. And, as has been well commented on, people gravitate towards blogs and sites that correspond to their world view, further isolating their thought processes.
(Related to this: I once argued that the fluid communication the web enables makes one realize just how hard it is to be unique.)
Lanier also sees individuality taking a hit on social networking sites like Facebook. In the mid 90s people defined themselves on the web via home pages, many of which were housed on the now-defunct hosting site GeoCities. I remember these pages and you probably do too. They were often amateurish in design and usually had god-awful background tiles that made text unreadable. But they had personality. It was hard to confuse one person’s home page for another’s. The same is not true with Facebook—most people’s pages look basically the same. (Yes, you get your own header but that’s not much.)
Now the fact that everyone’s Facebook pages look similar is hardly the greatest calamity facing society. But I get Lanier’s point. It’s one more chip away from the idea of individuality, of personality. The Internet is not encouraging individuation, but a borg-like assimilation into a mono culture. I predict this will cause the death of all humanity within 20 years.
October 27th, 2013 by Wil
In some quarters that seems to be the perception. Life is getting uglier and more unstable, global violence more pandemic etc. In “How to Create a Mind,” Ray Kurzweil’s new book, Kurzweil notes that…
…a Gallup poll released on May 4, 2011, revealed that only “44 percent of Americans believe that today’s youth will have a better life than their parents.”
Why is this? Kurzweil offers an interesting explanation, one that mirrors arguments I’ve made. There’s just a lot of information flying around overwhelming people. Kurzweil writes:
A primary reason people believe that life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in battle, and if the public could see it at all it was a grainy newsreel in a theater weeks later. During World War I a small elite could read about the progress of the conflict in a newspaper (without pictures). During the 19th century there was almost no access to news in a timely fashion for anyone.
In short, people were blessedly ignorant.
Interestingly, neither Kurzweil nor America’s other great mind – myself – is the first to comment on this problem. In an article entitled “Only Disconnect” in the October 26 New Yorker, the German theorist Siegfried Kracauer is quoted. In 1924, he said…
A tiny ball rolls toward you from very far away, expands into a close-up, and finally roars right over you. You can neither stop it nor escape it, but lie there chained, a helpless little doll swept away by the giant colossus in whose ambit it expires. Flight is impossible. Should the Chinese imbroglio be tactfully disembroiled, one is sure to be harried by an American boxing match… All the world-historical events on this planet—not only the current ones but also past events, whose love of life knows no shame—have only one desire: to set up a rendezvous wherever they suppose us to be present.
In 1924 people thought the news was roaring over them! This guy’s head would have exploded if he saw Sean Hannity or Rachel Maddow.
October 20th, 2013 by Wil
A recurring theme on this blog is my contention that medical care in this country (and probably a large part of the first world) is a joke. As I argued here, doctors are incentivized to offer or order care that may not actually be needed.
Recently I stumbled across an op-ed piece (written by a Dartmouth professor who has a book out entitled “Overdiagnosed.”) It adds some interesting information to this whole debate. In describing the analysis of one doctor who examined how medical care is dispensed, the article states…
Jack went on to document similarly wildly variable medical practices in the other New England states. But it wasn’t until he compared two of the nation’s most prominent medical communities — Boston and New Haven, Conn. — that the major medical journals took notice. In the late 1980s, both the Lancet and the New England Journal of Medicine published the findings that Boston residents were hospitalized 60% more often than their counterparts in New Haven. Oh, by the way, the rate of death — and the age of death — in the two cities were the same.
So, two populations were getting quite disparate amounts of medical care but were in the same state of health. Observations such as this led to the development of medical care epidemiology, the science of studying the effects of medicine.
Medical care epidemiology examines the effect of exposure to medical care: how differential exposure across time and place relates to population health outcomes. It acknowledges that medical care can produce both benefits and harms, and that conventional concerns about underservice should be balanced by concerns about overdiagnosis and overtreatment. Think of it as surveillance for a different type of outbreak: outbreaks of diagnosis and treatment.