Archive for the 'Technology' Category
September 10th, 2015 by Wil
There’s a recent Economist article on the advent of CRISPR, a gene-editing tool that could be used (at some indeterminate point in the future) to allow parents to design their offspring. Don’t want your kid to have your bad breath? Edit it out with CRISPR. Want your kid to excel at music in a way you never did? Bring on the CRISPR.
The catch is that a lot of attributes interact in ways we don’t understand. It’s possible that making someone too intellectual limits their emotional life or that making someone too empathetic could paralyze them with anxiety*. The article points out an interesting fear, that parents intent on improving their kids could actually damage them.
* These are examples I just made up; I have no idea if they are real.
If CRISPR can be shown to be safe in humans, mechanisms will also be needed to grapple with consent and equality. Gene editing raises the spectre of parents making choices that are not obviously in the best interests of their children. Deaf parents may prefer their offspring to be deaf too, say; pushy parents might want to boost their children’s intelligence at all costs, even if doing so affects their personalities in other ways.
And let’s not forget the elephant in the room.
…if it becomes possible to tweak genes to make children smarter, should that option really be limited to the rich?
August 25th, 2015 by Wil
One topic I tackle occasionally is the idea of artificial intelligence programs writing content. They are already writing non-fiction news stories, but the big event will come when (not if) AI writes fiction stories. As reported here, a computer science institute is offering a prize for algorithms (the guts of AI) that can generate short stories.
The article’s author touches on something I’ve mentioned in the past: the idea that AI won’t exclusively write the stories, but rather partner with human authors. Imagine a program that spits out a basic story outline which is then massaged by a human author. I expect we’ll see this sort of thing as well as comparable efforts in the realm of music and perhaps some visual arts.
I should note, I’m not exactly happy with this state of affairs, but I have some wearied acceptance and could see myself playing with this kind of technology.
August 19th, 2015 by Wil
There seem to be a number of articles and op-eds coming out, like this one, implying that working for the internet company Amazon.com is a miserable experience. It entails long hours, ego-shattering criticism and the like. Employees are, according to a few pieces I’ve read, often seen crying at their desks.
I have a hard time getting too concerned. Generally I think that if you don’t like where you work, then quit.
Having said that, I’m rather bemused at some of the defenses of Amazon. They argue that the hyper-competitive employee environment is necessary for Amazon to do great things. As one high-ranking employee quoted in the piece linked above says, “We’ve got our hands full reinventing the world.”
I’m sorry, what? How the fuck has Amazon reinvented the world? Because we can now order lots of shit from what is essentially a digital sales catalog? Cool, but not mind blowing. Books are now available on a mini computer (the Kindle). This might qualify as “neato.” The company may soon be delivering packages via drones, which is interesting but will probably quickly become commonplace. But none of this stuff is really awesome. Curing cancer, that would be awesome. Determining whether string theory is correct, that would be awesome. Figuring out how to maximize everyone’s satisfaction would be awesome.
Delivering Kindles via drones… not so much.
July 27th, 2015 by Wil
There’s a blog called “Wait But Why” that does a good job of taking dense, philosophical topics and presenting them in an amusing but thought-provoking way. The author recently did a very long post on the kind of artificial intelligence technology that many think is around the corner (possibly only a few decades away.) He concedes that AI could radically reshape the course of history, possibly by bringing about a Utopia or possibly by causing humanity’s destruction.
In part II of the post the author gets into the more negative scenarios. A point he makes well is that AI wouldn’t end humanity because the AI is “evil” or out to get us, but rather that humanity’s extinction might just be a logical output of badly coded commands. A simplistic example: someone tells an AI to solve world hunger and the computer “thinks”, “Well, only living humans are hungry, so if I kill them all, they can’t be hungry.” Again, a very simplistic example, but a much more convoluted one could occur and wipe us out.
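To make the “badly coded commands” point concrete, here’s a toy sketch, entirely my own invention and nothing like real AI code: an optimizer told only to minimize hunger, with no rule saying that some ways of achieving that are off the table.

```python
def hungry_count(population):
    """Count people flagged as hungry."""
    return sum(1 for person in population if person["hungry"])

def naive_plan(population, actions):
    """Pick whichever action leaves the fewest hungry people.
    Nothing here says the people themselves must survive."""
    return min(actions, key=lambda act: hungry_count(act(population)))

def feed_everyone(pop):
    return [{**p, "hungry": False} for p in pop]

def remove_everyone(pop):
    return []  # nobody left, so nobody is hungry

people = [{"hungry": True}, {"hungry": False}, {"hungry": True}]

# Both actions score a perfect zero, so the optimizer has no reason to
# prefer the humane one; the tie breaks on list order, not morality.
chosen = naive_plan(people, [remove_everyone, feed_everyone])
```

The usual fix is to add constraints or a richer objective, which circles right back to the problem the post describes: we would first have to write our morals down precisely.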
So the challenge is that we have to program our morality into the AI. It needs to follow our moral rules. But this is interesting because, as I’ve pointed out before, we really don’t have a clear and concise ruleset. We think we do, the Golden Rule being a good example, but the Golden Rule is really quite flawed. It’s more true to say we have a general set of moral intuitions that we kind of follow sometimes. Hardly the sort of thing that can be fed to a computer program.
So maybe the great challenge for humanity right now is to create a purely logical, objective set of moral rules. I personally suspect that is impossible.
June 22nd, 2015 by Wil
I’ve mentioned that I’ve been drawing a lot of comic book style art lately. And I’m always talking about the notion that computers and robots will soon be taking away a lot of people’s jobs. Today I woke up and found myself thinking about how software could allow non-artists to render decent-looking comic book art.
We all know computers are great at numbers and calculations. So the question is, can we turn a scene in a comic book world into numbers and calculations? That is exactly what happens with the kind of 3D animation so prevalent in movies (and there we have the added complication of motion.) Basically, any object—a box, a cat, a person, a spaceship—can be reduced to a series of lines and curves, and these can be rendered as numbers. So I can envision software where you basically say, “Show me an office space (basically the interior of a box) and put inside it a desk and a guy in a business suit.” Then you could rotate the scene around, move the guy, or move his arms at the joints (like an action figure), etc. From there the software could apply different styles to the way the lines are drawn (using a thin or thick pen, for example, or using different kinds of shading techniques.)
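Here’s a tiny sketch of what I mean by “a scene is just numbers.” All the names and numbers are made up for illustration; no such product exists that I know of. A shape is a list of coordinates, rotating it is arithmetic, and a drawing “style” is applied at the very end without touching the geometry.

```python
import math

def make_box(width, height):
    """A rectangle's corners as (x, y) coordinates -- the desk, the room, etc."""
    return [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height)]

def rotate(points, degrees):
    """Spin the shape around the origin: pure arithmetic on the coordinates."""
    a = math.radians(degrees)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

def render(points, pen_width):
    """Apply a drawing 'style': same geometry, different ink."""
    edges = zip(points, points[1:] + points[:1])
    return [(start, end, pen_width) for start, end in edges]

desk = make_box(4.0, 2.0)
thin = render(rotate(desk, 30), pen_width=1)   # fine-lined style
thick = render(rotate(desk, 30), pen_width=5)  # bold, inky style
```

The point of separating `rotate` from `render` is exactly the workflow in the post: you pose the scene once, then try out as many pen or shading styles as you like.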
This would go a long way towards allowing dilettantes to produce decent-looking comic books. But would it also put real comic book artists out of work? I dunno… probably. At least it would make the job market tougher. I suspect you would start to see more of a hybrid approach where some of the art is produced by software and some by an artist.
I’ve also mentioned the idea of software writing stories. I wonder if we will see in my lifetime the first comic book fully created by computer. At that point we will know computers are our masters.
June 19th, 2015 by Wil
In the field of ethics, you often hear discussion of “The Trolley Problem,” a fictional scenario where a person is forced to choose the best outcome from a situation where at least one person is guaranteed to die. I stumbled across this web article which makes the case that the self-driving Google car could bring the trolley problem to reality. If your car is headed into a crash, should it sacrifice you to save bystanders?
How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
I would offer an additional moral question. If my car decides to sacrifice me can it be programmed to quickly and painlessly kill me as opposed to leaving me to the destruction of a car accident?
May 28th, 2015 by Wil
A while back I read a blog post by a guy describing a friend of his who still bought CDs. The guy did this because he believed that the act of curation was part of what made the music special for him. It wasn’t enough to have a vast collection of music at his fingertips (as anyone who has access to the web does); he wanted to have a relationship of sorts with the music. He wanted to purchase the CD, to eagerly read its jacket, to put the CD on and listen to the music, determining which tracks were his favorites, etc. I get the point, though I think that kind of fetishization is a little fruity.
But there is something that I think occurs when you have the massive digitization of music albums: each individual album becomes less valuable. Not just in a financial sense, but in a harder-to-define personal sense. I can remember as a kid that certain albums had a strong cachet. “Sgt. Pepper’s” would be one, as would Pink Floyd’s “The Wall.” These albums were almost legendary in certain circles. I’m sure fans of hip hop or heavy metal or various other music genres can point to similar examples of their own. And additionally, when I was a kid, I would find certain unknown albums that I came to love and they became personal favorites of mine. (A bizarre album by the group Zodiac Mindwarp and the Love Machine was one.) This music had great personal value to me.
And I wonder if that sort of thing is disappearing. Because music can be obtained with so little effort, is music losing not just financial but personal value? The guy buying CDs above is sort of forcing himself to maintain the previous value of music (personal and financial), even if the rest of the world has moved on.
This is counter to the consumer-oriented forces of “more is better,” which argue that the cheapening of things can only be good for people. And I suspect they’re basically right in terms of food and other basic needs. But not so much in regards to objects of personal fetishization. The valuation of such things has always been an ethereal process—exactly why a culture values one album of music over another is unclear (especially since music really has no purely utilitarian value the way food or shelter does.)
It’s a mystery.
May 20th, 2015 by Wil
Digital music is a topic I occasionally discuss around here. And writing about the topic abounds on the web, often tackling the issue of how music producers can earn a living making music while music consumers can enjoy music cheaply (because otherwise they will resort to piracy.) Spotify is often portrayed as a hero or villain as are a few other similar streaming music services that pay little money to musicians.
I feel that unless the writing on the topic mentions YouTube, it’s missing the elephant in the room. When I want to hear a piece of music, my first choice is always to see whether it’s on YouTube. I’m seldom disappointed.
Much of this music is obviously pirated. (There’s lots of pirated movies as well.) Some guy uploads his favorite music to YouTube and it’s there for all to hear. Additionally, he can show advertising with the video and split the revenue with YouTube (owned by Google.)
I’ve long wondered whether something similar could happen with books as the ebook format (the book equivalent of an mp3) becomes more popular. According to the GoodEReader site, it’s happening.
Google Play Books is quickly becoming a den of iniquity and a veritable cesspool of piracy. It is ridiculously easy for someone to start a publishing company and upload thousands of pirated books and piggyback on the success of established authors. Google won’t do anything about the pirated copies and has even told authors inquiring about their illegitimate books that they have to contact the publisher. It is a vicious cycle and so far Google Play Books is firmly endorsing piracy.
If you casually browse the Google Play Books section, it is fairly easy to find all of the modern bestsellers, at a fraction of the price. This includes pirated copies of the entire 50 Shades trilogy by E.L. James, all seven Harry Potter books, or even George RR Martin’s A Song of Ice and Fire series – all bundled together and sold alongside legitimate content offerings.
Google made the following statement to Good e-Reader when asked about the rampant piracy issue on Play. “Google Play takes copyright seriously. We take swift action when we receive a DMCA complaint, which the copyright holder can complete here. Additionally, we’re constantly improving our systems to provide a better experience.”
It honestly does not seem like Google is taking piracy seriously at all. They do not have cover art algorithms that cross-reference newly published content with an original author. Nor does it employ any methods to scan for ISBN numbers and reference them against the Open Library or any other mainstream database.
UPDATE: Another blogger presses Google on the issue. The results?
When I asked what Google was doing to fight piracy in Google Play Books, they were unable to name a single activity. When I asked what it would take to get a commercial ebook pirate banned from Google Play Books, the Google rep was unable to even confirm that they would even ban a pirate after dozens of valid DMCA notices. When I asked what improvements they planned to make, none came to mind.
April 21st, 2015 by Wil
I pause to ask my readers a question. Are any of you considering uploading your mind into a computer? I think you should be aware of some potential problems.
The idea might sound crazy, but the possibility of such a thing is oft-discussed by scientists and psychologists who think it may be a real possibility in coming decades. How would such a thing work? First let’s consider what is probably now the mainstream view of the mind. The mind, on this view, essentially arises out of the complex, dense circuitry that is the human brain. (Each “circuit” could be thought of as an individual neuron, or perhaps a group of neurons that perform the same basic function, like moving your index finger.) According to this view (which I basically subscribe to), your mind is your brain.
Now, if we could map out a person’s brain network down to very small details—and we seem to be getting closer and closer to this—we could then program that network into a computer and thus recreate that person—their personality, their essence—on a computer. And that person could conceivably live forever.
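As a cartoon of what “program that network into a computer” could mean (wildly simplified; a real brain has tens of billions of neurons, and the names and weights below are invented), a mapped network is just data plus an update rule:

```python
# A "mapped brain" as a weighted graph: neuron -> list of (target, weight).
connections = {
    "A": [("B", 1.0)],
    "B": [("C", 0.5), ("A", -0.3)],
    "C": [],
}

def step(active, threshold=0.4):
    """One tick of simulation: each active neuron pushes its outgoing
    weights onto its targets; targets over the threshold fire next."""
    charge = {}
    for neuron in active:
        for target, weight in connections[neuron]:
            charge[target] = charge.get(target, 0.0) + weight
    return {n for n, c in charge.items() if c >= threshold}

state = {"A"}
state = step(state)  # A excites B
state = step(state)  # B excites C; B's weak inhibition of A doesn't fire it
```

The uploading claim is essentially that, with enough detail in the `connections` table, running `step` over and over reproduces the person. Whether the result would actually experience anything is the open question the rest of the post worries about.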
There are a couple problems so far. One is that you aren’t really uploading your consciousness to a computer so much as you are simply cloning your mind. That consciousness—the uploaded mind—will live forever. The flesh-and-blood you will still eventually die, as flesh and blood does. Also, it’s still unclear how our subjective consciousness arises out of our complex neural machinery. I could program a robot to respond to the wavelength of light we call red, but would it “see” red in any way comparable to the way we see red? It’s that perception that is really the magic of living. Would an uploaded mind possess this subjective magic or would it merely be a very complex robot? I don’t think anyone can authoritatively say.
Now let’s consider another view of the mind, this one advocated by philosopher David Chalmers among others. This view holds that the mind extends beyond the realm of the brain into the rest of the physical world. To grasp this notion, take stock of your experience right now. You are seeing things, probably hearing things, maybe tasting and smelling things if you’re reading this over lunch. Your experience, your mindstate, would be very different without these particular outside stimuli. So, in a sense, these stimuli are part of your mind.
Here’s another way to think about it. The more popular “your-brain-is-all-you-are” theory I first mentioned says that your mind arises from various electrical signals zipping through the circuitry of your brain. But what happens when I look at an apple? Photons bounce off the apple into my eyes, which results in the firing of neurons that somehow produces the subjective experience of seeing the apple. Isn’t this pathway of photons going from the apple to my eye similar to the pathway of a firing neuron? So isn’t every outside component (the apple, the photons bouncing off it, etc.) part of my mind?
If Chalmers is on to something then we have a problem with mind uploading. If we upload only the brain part of your mind, not the external environment, we are only uploading part of the mind.
Now, maybe this could be solved. Maybe sensors could be created that would duplicate our senses, even augment them. For example, you could have some chemical sensor that, when provided cheese, fired the neural circuits in the uploaded mind that correlate to the neurons that fired when tasting cheese. But this idea seems a lot more complex than the already vastly complex task of uploading a mind to a computer.
April 11th, 2015 by Wil
As smart as computers are, they’re dumb in many ways. For instance, they have a hard time identifying objects in their field of vision. (Their “vision” of course being information sensed by various electronic sensors.) Even though humans can see objects without any effort*, computers stumble on this basic task.
*Actually, even human identification of objects is not flawless. Just yesterday I was looking around for my coffee mug and realized it was right in front of me. I was staring directly at it and just had trouble separating it from everything else on the kitchen counter.
I’ve been reading a bit about a new process in computers called “deep learning” that is making computers much smarter. (Link goes to Wikipedia article.) So much smarter that they can now be trained to recognize objects in image files even more accurately than humans. You can present a computer with 20,000 images and ask it to show you all the images with a cat and it will do so. You can then ask specifically for cats with pointy ears and it will do so.
This is pretty interesting when you think about it. How does a computer – a dumb, soulless computer – know to filter out dogs or chipmunks when it shows cats? I imagine it’s categorizing objects by fairly precise properties. A cat’s nose varies from a chipmunk’s in terms of its size relative to the whole face, cats have particular ear shapes, etc. These must be the kinds of properties computers use to separate cats from other objects.
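My guess above can be sketched as a toy classifier. The feature names and numbers here are completely made up: describe each animal by a few measured proportions and label it with whichever known prototype it sits closest to.

```python
# Hand-invented "prototype" features for each animal category.
PROTOTYPES = {
    "cat":      {"nose_to_face_ratio": 0.10, "ear_pointiness": 0.9},
    "dog":      {"nose_to_face_ratio": 0.25, "ear_pointiness": 0.4},
    "chipmunk": {"nose_to_face_ratio": 0.15, "ear_pointiness": 0.2},
}

def distance(a, b):
    """Squared distance between two feature dictionaries."""
    return sum((a[k] - b[k]) ** 2 for k in a)

def classify(features):
    """Label with the nearest prototype."""
    return min(PROTOTYPES, key=lambda label: distance(features, PROTOTYPES[label]))
```

The big caveat: in actual deep learning nobody writes `PROTOTYPES` by hand. The network learns its own internal features from the 20,000 images, which is a large part of why it works so well, and why it’s hard to say exactly what properties it’s using.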
An obvious and interesting next step would be to have computers create their own images of cats. (Essentially the computer would become an artist.) And perhaps “encourage” them to highlight different properties of cats over others, thus developing an aesthetic style. I’ve talked in the past about computers making art (it’s been going on for a while.) I suspect deep learning will speed things up even more on this front.
Of course, as computers get smarter humans grow anxious. Will they take our jobs? Jeremy Howard, a deep learning architect, has concerns.
Is this a good thing or a bad thing? It just depends how it’s used. It could be a wonderful thing, because it could allow us to spend our time doing the things we want to do rather than the things we have to do, which is, I think, what humanity has been aiming at for thousands of years. But on the bad side, that by definition puts people out of jobs. Eventually, it puts everybody out of a job.
If we remove the idea of the soul, at some point in history [there's nothing that] computers and machines won’t be able to do at least as well as us. We can argue about when that will happen. I think it will be in the next few decades.
What happens when the amount of things that can’t be automated is much smaller than the amount of people that exist to do them? That’s this point where half the world can’t add economic value. That means half the world is destitute and unable to feed themselves. So we have to start to allocate some wealth on a basis other than the basis of labor or capital inputs. The alternative would be to say, “Most of humanity can’t add any economic value, so we’ll just let them die.”