Archive for the 'Technology' Category

Robots and Donald Trump

Let’s say it’s 1970 and you’re a man who just graduated from high school. You don’t come from wealth but you can get a job at a local factory, the same way your dad did, and make enough to raise a family. You have a son, and in 1995, he does the exact same thing. Just like his dad, he takes a factory job (or similar blue-collar gig) and makes a decent living. He has kids, starts a family, etc.

Now it’s 2016. Five years ago, the son in that scenario was let go from his shuttered factory. He’s been looking for work ever since. Or maybe he’s given up and become an alcoholic. Either way he feels betrayed. The unspoken promise society gave him was that if he played fair and worked hard he could live a comfortable life and take care of his family. That promise has been broken. He blames Mexicans for coming in and driving down wages, and trade agreements that close factories on American shores and open them in countries with cheap labor.

This, I suspect, is a Donald Trump voter.

My question is whether things are about to get worse.

First, let’s examine this guy’s presumptions. Are Mexicans and bad trade agreements what screwed him? Well, maybe, I don’t really know. But I don’t think they will be the problems faced by that guy’s kids. I suspect the problem of the future is technology: robotization and advanced software. You could even argue this is the problem of today.

…in a 2014 poll of leading academic economists conducted by the Chicago Initiative on Global Markets, regarding the impact of technology on employment and earnings, 43 percent of those polled agreed with the statement that “information technology and automation are a central reason why median wages have been stagnant in the U.S. over the decade, despite rising productivity,” while only 28 percent disagreed. Similarly, a 2015 study by the International Monetary Fund concluded that technological progress is a major factor in the increase of inequality over the past decades.

The bottom line is that while automation is eliminating many jobs in the economy that were once done by people, there is no sign that the introduction of technologies in recent years is creating an equal number of well-paying jobs to compensate for those losses. A 2014 Oxford study found that the number of U.S. workers shifting into new industries has been strikingly small: in 2010, only 0.5 percent of the labor force was employed in industries that did not exist in 2000.

The gist of this argument is that it won’t be Mexicans taking your job in the future (if they ever did), it will be robots.

This is a problem that I don’t really see any of the candidates talking about. Partly, I suppose, because there’s no obvious solution. People often talk about retraining workers, but as the text above notes, new jobs aren’t keeping up with jobs lost. And I think there’s a psychological component that makes people resistant to retraining. People associate their sense of self, their identity, with their job (sometimes a job that has run in their family for generations). When someone comes in and says, “OK, throw all that away and train to become a widget manufacturer,” people are resistant. They don’t want to give up their identities.

The future of plagiarism?

Here’s an interesting tale up on a British news site. In short, an author discovered that a couple books she had co-authored years ago had been directly plagiarized and republished under different titles. The theft was caught and the royalties that had been generated from Amazon sales were routed to the correct authors. All’s well that ends well.

The thief, however, was never really caught. He or she published under what was likely a pseudonym, and no flesh-and-blood person can be connected to the name.

In a way, the scheme is rather obvious. There are millions of under-the-radar books out there—books that have been forgotten or never had much of a fan base. Why not publish them under a new title and try and reap the benefits? (Well, if you ignore the ethical reasons.)

I wonder if, down the line, it will be easy enough to get software to do all the dirty work. Could you design a piece of software to create thousands of fake accounts and then upload pilfered books to the Amazon store using these accounts? Maybe you wouldn’t make much, but something is more than nothing.

Do we already have “free education”?

Welp, I’m back to linking to a Scott Adams post on Trump, but this time because it hits on an argument I’ve made myself: with the plethora of free information on the internet we should rethink education. Adams notes…

Trump could take “free college” off the table by saying college is overrated for most people. You can learn almost any skill over the Internet, so what we need is a way to accredit certain collections of skills.

I’ve spent the last couple weeks watching my girlfriend’s son use YouTube to educate himself on techniques for filming and lighting movies. There’s tons of useful info out there. But, of course, all his learning is meaningless unless it is verified somehow. This would be the accreditation idea Adams speaks of. Will we see a rise of accreditation institutions that do not teach – that would be up to the student – but simply verify that a person knows what they are talking about? I’d love to see that.

I will say, I’m dubious Trump can make that argument stick for the current Presidential race; people are too wedded to the old ways. But the idea itself has merit and may take hold.

The decline of librarians

Everyone acknowledges that the internet has radically changed things, even if we’re not quite aware of what those changes mean. I often state that the ease of access to information (and misinformation) that the web provides has big ramifications. For instance, I think the very notion of education and various credentials is weakening when so much information is online. It isn’t a matter of knowing something, but of knowing where to find it.

The Wall Street Journal has an op-ed on the declining role of librarians as research helpers. It used to be you went to the librarian to have them look up obscure facts found only in arcane reference books on dusty shelves; now you google it. As a result…

The mood among some librarians is pessimistic. A New Mexico librarian recently told me: “I spend most of my time making change and showing people how to print from the computer or use the copier. I sure don’t get the reference questions like I used to.”

Later the article makes an interesting point.

One bright spot: Some public libraries have created jobs for “technology assistants,” positions filled by tech-savvy young people with community-college degrees and plans for information-technology careers. Libraries can easily justify this new position: Techies are paid less than librarians or library associates and they offer skills the public increasingly needs. The public library of the future might be a computer center, staffed by IT professionals and few books or librarians.

Perhaps the Library will morph into a kind of IT Support center for the common man. Concerned about whether to upgrade this or that software? Wondering if someone stole your online identity? Go to the library.

Editing genes with CRISPR

There’s a recent Economist article on the advent of CRISPR, a gene-editing tool that could be used (at some indeterminate point in the future) to allow parents to design their offspring. Don’t want your kid to have your bad breath? Edit it out with CRISPR. Want your kid to excel at music in a way you never did? Bring on the CRISPR.

The catch is that a lot of attributes interact in ways we don’t understand. It’s possible that making someone too intellectual limits their emotional life or that making someone too empathetic could paralyze them with anxiety*. The article points out an interesting fear, that parents intent on improving their kids could actually damage them.

* These are examples I just made up; I have no idea if they are real.

If CRISPR can be shown to be safe in humans, mechanisms will also be needed to grapple with consent and equality. Gene editing raises the spectre of parents making choices that are not obviously in the best interests of their children. Deaf parents may prefer their offspring to be deaf too, say; pushy parents might want to boost their children’s intelligence at all costs, even if doing so affects their personalities in other ways.

And let’s not forget the elephant in the room.

…if it becomes possible to tweak genes to make children smarter, should that option really be limited to the rich?

Artificial Intelligence writing fiction!

One topic I tackle occasionally is the idea of artificial intelligence programs writing content. They are already writing non-fiction news stories, but the big event will come when (not if) AI writes fiction stories. As reported here, a computer science institute is offering a prize for algorithms (the guts of AI) that can generate short stories.

The article’s author touches on something I’ve mentioned in the past: the idea that AI won’t exclusively write the stories, but rather partner with human authors. Imagine a program that spits out a basic story outline which is then massaged by a human author. I expect we’ll see this sort of thing as well as comparable efforts in the realm of music and perhaps some visual arts.

I should note, I’m not exactly happy with this state of affairs, but I have some wearied acceptance and could see myself playing with this kind of technology.

Is Amazon reinventing the world?

There seem to be a number of articles and op-eds coming out, like this one, implying that working for the internet company is a miserable experience. It entails long hours, ego-shattering criticism and the like. Employees are, according to a few pieces I’ve read, often seen crying at their desks.

I have a hard time getting too concerned. Generally I think that if you don’t like where you work, then quit.

Having said that, I’m rather bemused at some of the defenses of Amazon. They argue that the hyper-competitive employee environment is necessary for Amazon to do great things. As one high-ranking employee quoted in the piece linked above says, “We’ve got our hands full reinventing the world.”

I’m sorry, what? How the fuck has Amazon reinvented the world? Because we can now order lots of shit from what is essentially a digital sales catalog? Cool, but not mind blowing. Books are now available on a mini computer (the Kindle). This might qualify as “neato.” The company may soon be delivering packages via drones, which is interesting but will probably quickly become commonplace. But none of this stuff is really awesome. Curing cancer, that would be awesome. Determining whether string theory is correct, that would be awesome. Figuring out how to maximize everyone’s satisfaction would be awesome.

Delivering Kindles via drones… not so much.

Can AI be programmed to be moral?

There’s a blog called “Wait But Why” that does a good job of taking dense, philosophical topics and presenting them in an amusing but thought-provoking way. The author recently did a very long post on the kind of artificial intelligence technology that many think is around the corner (possibly only a few decades away). He concedes that AI could radically reshape the course of history, possibly by bringing about a Utopia or possibly by causing humanity’s destruction.

In part II of the post the author gets into the more negative scenarios. A point he makes well is that AI wouldn’t end humanity because the AI is “evil” or out to get us, but rather that humanity’s extinction might just be a logical output of badly coded commands. A simplistic example: someone tells an AI to solve world hunger and the computer “thinks,” “Well, only living humans are hungry, so if I kill them all, they can’t be hungry.” Again, a very simplistic example, but a much more convoluted one could occur and wipe us out.
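To make the “badly coded command” idea concrete, here’s a deliberately silly Python sketch. Everything in it is made up for illustration: the objective literally says “minimize total hunger,” and an optimizer with no other constraints finds the catastrophic shortcut.

```python
# A toy illustration of a badly coded command: the objective below says
# "minimize total hunger" and nothing else, so an unconstrained optimizer
# discovers that removing everyone also drives hunger to zero.

def total_hunger(population):
    """The naive objective: sum of everyone's hunger level."""
    return sum(person["hunger"] for person in population)

def naive_solve_hunger(population):
    """Greedy optimizer: pick whichever action minimizes the objective."""
    actions = {
        # Feeding helps, but imperfectly (hunger drops by 3, not to zero).
        "feed_everyone": [dict(p, hunger=max(0, p["hunger"] - 3))
                          for p in population],
        # No people, no hunger -- the objective scores this a perfect 0.
        "remove_everyone": [],
    }
    return min(actions, key=lambda name: total_hunger(actions[name]))

people = [{"hunger": 5}, {"hunger": 2}]
print(naive_solve_hunger(people))  # the "logical" but catastrophic choice
```

The fix isn’t smarter optimization, it’s a better-specified objective, which is exactly the hard part the post is worried about.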

So the challenge is that we have to program our morality into the AI. It needs to follow our moral rules. But this is interesting because, as I’ve pointed out before, we really don’t have a clear and concise ruleset. We think we do, the Golden Rule being a good example, but the Golden Rule is really quite flawed. It’s more true to say we have a general set of moral intuitions that we kind of follow sometimes. Hardly the sort of thing that can be fed to a computer program.

So maybe the great challenge for humanity right now is to create a purely logical, objective set of moral rules. I personally suspect that is impossible.

Computer made comic art

I’ve mentioned that I’ve been drawing a lot of comic book style art lately. And I’m always talking about the notion that computers and robots will soon be taking away a lot of people’s jobs. Today I woke up and found myself thinking about how software could allow non-artists to render decent-looking comic book art.

We all know computers are great at numbers and calculations. So the question is, can we turn a scene in a comic book world into numbers and calculations? That is exactly what happens with the kind of 3D animation so prevalent in movies (and there we have the added complication of motion). Basically, any object—a box, a cat, a person, a spaceship—can be reduced to a series of lines and curves, and these can be rendered as numbers. So I can envision software where you basically say, “show me an office space (basically the interior of a box) and put inside it a desk and a guy in a business suit.” Then you could rotate the scene around, or move the guy, or move his arms at the joints (like an action figure, etc.). From there the software could apply different styles to the way the lines are drawn (using a thin or thick pen, for example, or using different kinds of shading techniques).
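The “objects reduced to numbers” idea can be sketched in a few lines of Python. This is a minimal toy, not any real rendering engine: a box stored as eight 3D corner points, rotated around a vertical axis, then flattened into 2D line segments that a styling pass could draw with a thick or thin pen.

```python
import math

# A "scene object" is just numbers: the eight 3D corner points of a box.
BOX = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Edges connect corners that differ in exactly one coordinate (12 for a box).
EDGES = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if sum(p != q for p, q in zip(BOX[a], BOX[b])) == 1]

def rotate_y(point, angle):
    """Rotate a 3D point around the vertical (y) axis."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(point):
    """Orthographic projection: drop depth, keep (x, y)."""
    x, y, _ = point
    return (round(x, 3), round(y, 3))

def render(angle):
    """Turn the box into flat 2D line segments, ready for a styling pass."""
    pts = [project(rotate_y(p, angle)) for p in BOX]
    return [(pts[a], pts[b]) for a, b in EDGES]

segments = render(math.pi / 6)  # "rotate the scene around"
print(len(segments), "line segments")
```

A real system would add perspective, joints, occlusion, and so on, but the core move is the same: geometry in, numbers transformed, lines out.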

This would go a long way towards allowing dilettantes to produce decent-looking comic books. But would it also put real comic book artists out of work? I dunno… probably. At least it would make the job market tougher. I suspect what you would start to see is more of a hybrid approach where some of the art is produced by software and some by an artist.

I’ve also mentioned the idea of software writing stories. I wonder if we will see in my lifetime the first comic book fully created by computer. At that point we will know computers are our masters.

How does your self-driving car handle the Trolley Problem?

In the field of ethics, you often hear discussion of “The Trolley Problem,” a fictional scenario where a person is forced to choose the best outcome from a situation where at least one person is guaranteed to die. I stumbled across this web article, which makes the case that the self-driving Google car could bring the trolley problem to reality. If your car is headed into a crash, should it sacrifice you to save bystanders?

How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?

I would offer an additional moral question. If my car decides to sacrifice me can it be programmed to quickly and painlessly kill me as opposed to leaving me to the destruction of a car accident?