Category Archives: Technology

Dangerous Data

In a recent article on political advertising I said…

Think about what a person’s web activities and Facebook likes reveal. Look at that guy over there who frequents the Huffington Post and “likes” the Black Lives Matter page. A bleeding heart liberal no doubt. How about the gal who hovers over the NRA blog and likes Sean Hannity’s page? You get the picture.

The more I muse on that point, the more I realize how useful Facebook likes are for assembling a political profile of a person. And it’s not only the obvious stuff like whether they “like” a certain candidate or political TV show. A lot can be deduced from the books a person “likes.” Someone who liked (I’m going to stop enclosing like in quotes) author Toni Morrison is presumably a liberal, even a certain kind of liberal (concerned with social justice, less concerned about free trade). And they might respond better to a specific advertising approach (touchy-feely as opposed to a rousing “let’s get those Republicans!”).

On top of all that, liking Toni Morrison probably exposes something about a person’s culinary taste (open to ethnic food), movie choices (dramas and Woody Allen comedies), interest in video games (nada) and so on. And liking Toni Morrison is only one data point about a person. What if you could access hundreds of data points? (And you can on Facebook.) You could develop a complete picture of a person including some unexpected revelations. Careful analysis might reveal that people who like both Toni Morrison and Grand Theft Auto are also big fans of power tools.

Additionally, likes aren’t the only data point advertisers can access. What if everything a person ever said on Facebook was up for grabs? Maybe he or she never liked Toni Morrison’s page but did once say in a comment, “I’m a big fan of Beloved.” Until recently, this kind of “conversational” information has been beyond software’s comprehension, but AI is changing that. What if software could access ten years of a person’s Gmail email to construct a profile of them? What would it learn?
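To make that concrete, here’s a toy sketch of like-based profiling in Python. Every page name, weight and score below is invented for illustration; a real ad-tech model would be trained on millions of users, not a hand-written dictionary.

# Hypothetical lean scores: negative = liberal, positive = conservative.
# All pages and weights are made up for illustration.
LIKE_SIGNALS = {
    "Black Lives Matter": -0.8,
    "Huffington Post": -0.6,
    "Toni Morrison": -0.5,
    "NRA": 0.8,
    "Sean Hannity": 0.7,
}

def political_lean(likes):
    """Average the lean scores of the pages a user likes."""
    scores = [LIKE_SIGNALS[page] for page in likes if page in LIKE_SIGNALS]
    return sum(scores) / len(scores) if scores else 0.0

user_likes = ["Toni Morrison", "Huffington Post", "Grand Theft Auto"]
lean = political_lean(user_likes)
label = "liberal" if lean < 0 else "conservative" if lean > 0 else "unknown"
print(f"lean={lean:+.2f} -> likely {label}")  # lean=-0.55 -> likely liberal

Each additional signal (comments parsed by AI, group memberships, email) would just add more entries and more dimensions to the same basic scoring scheme.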

We’ve heard for years from activists who complain that we are giving away something of great value when we use Facebook and similar data gathering web sites. I’ve tended to blow those complaints off but I’m starting to see the danger. Data is tremendous power.

The James Damore Manifesto

The latest internet controversy seems to be about James Damore, a Google employee who posted a manifesto to the company’s internal message board. The manifesto makes various arguments, among them the idea that women may not be suited to the rigors of software engineering for reasons of biology. After Damore posted his document, it leaked to the public, the predictable uproar ensued and the author was fired.

Nothing can really be gained by offering my thoughts on this, but what the hell.

I’m aware that Google has every right to fire any employee whenever they want so there’s no free speech issue here. That said, I don’t think firing Damore was the best tactic. We live in an era where, for every workplace grievance, the only punishment advocated is employee termination. But in this case I suspect the result will be that Google employees sympathetic to Damore’s statements will now just keep their mouths shut. Their views will not be challenged (since one can’t challenge unexpressed ideas) and they’ll probably even harden their stance because of what they saw happen to a fellow traveller.

What if, instead of firing Damore, Google had presented a public debate on the issue of gender roles and biology? This would have produced an airing of the issues and allowed Google to explain why they found Damore’s ideas repugnant.

I concede that one can make a decent argument for Damore’s firing. After his screed was posted, any female subordinate of his could justifiably fear that his biases were harming her career. She could fairly suspect that his beliefs prevented a fair assessment of her talents.

So far, I’ve been avoiding the elephant in the room. How legitimate are Damore’s arguments? First, I have to confess that I’m currently sitting in a Discount Tire showroom with no internet access so I can’t review the specifics of his manifesto. But they are arguments we are all familiar with: women can’t handle stress, they like an even work/family balance that limits their ability to do overtime, they aren’t as status driven as men and thus slower to climb to high positions, etc. Are any of those points valid?

Well, I don’t know. I don’t think any of those arguments have been proven scientifically. I doubt they could be. And I think gender bias is real, so we need to consider that as a cause for the lack of women in traditionally male vocations. Additionally, there’s plenty of evidence that the mostly male software development culture has elements of misogyny.

That said, I think most of us believe that there are behavioral differences between men and women. And we suspect that some of those differences have biological causes that were “programmed” into our brains by evolution. (I recognize there are all sorts of controversies tied into the preceding sentences: nature versus nurture, how behaviors can be encoded into biology, and so on. I’m going to ignore them for now.)

Is there any evidence for these beliefs and suspicions? It’s been a while since I’ve read up on the topic, but I believe there is some meat on the bone, generally focused on testosterone/estrogen levels and that sort of thing. I’m entirely willing to be proven wrong by contrary evidence.

But exploring this evidence (or lack thereof) is exactly the kind of thing I think an open debate would have initiated. Instead we’ve simply gotten more anxieties and simmering resentments.

Sammy Hagar on our robot overlords

I happen to be reading through Sammy Hagar’s autobiography, “Red,” these days. (I know—I’m always reading these dense, philosophical tomes!) As you might predict, it has a lot of dirt on Eddie Van Halen.

It also has a paragraph that ties in with a lot of modern commentary on the robotization of the workforce and the dangers it presents. Sammy discusses his meetings with a bigwig at the Campari company.

He showed me the new $100 million Campari factory. Only about five people were running the whole place with these efficient new machines that wrap and seal twenty-five hundred cases of Campari in, like, two minutes. … Twenty years ago, they probably had six thousand employees. Now they have a dozen, most in the office.

From six thousand to a dozen. Hmmm…

Reading robots stories

It’s always interesting when you start thinking about some concept and then see it pop up all over the place. For instance, I’ve lately been talking about narratives—the idea that we define our reality according to various stories we tell ourselves. And I mentioned that narratives are the way we pass our values and beliefs to each other.

Then I stumbled across this article about narratives being used to imbue AI robots with a kind of moral ruleset.

An AI that reads a hundred stories about stealing versus not stealing can examine the consequences of these stories, understand the rules and outcomes, and begin to formulate a moral framework based on the wisdom of crowds (albeit crowds of authors and screenwriters). “We have these implicit rules that are hard to write down, but the protagonists of books, TV and movies exemplify the values of reality. You start with simple stories and then progress to young-adult stories. In each of these situations you see more and more complex moral situations.”

Though it differs conceptually from GoodAI’s, Riedl’s approach falls into the discipline of machine learning. “Think about this as pattern matching, which is what a lot of machine learning is,” he says. “The idea is that we ask the AI to look at a thousand different protagonists who are each experiencing the same general class of dilemma. Then the machine can average out the responses, and formulate values that match what the majority of people would say is the ‘correct’ way to act.”
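Riedl’s “average out the responses” idea is, at bottom, majority voting over observed actions. Here’s a minimal sketch in Python; the story data is invented, and this is only my reading of the quoted description, not GoodAI’s or Riedl’s actual code.

from collections import Counter

# Each observation pairs a dilemma with the action a story's protagonist took.
# The data below is made up for illustration.
observations = [
    ("found lost wallet", "returned it"),
    ("found lost wallet", "returned it"),
    ("found lost wallet", "kept it"),
    ("friend in danger", "helped"),
    ("friend in danger", "helped"),
]

def learned_norms(obs):
    """Map each dilemma to the action the majority of protagonists took."""
    by_dilemma = {}
    for dilemma, action in obs:
        by_dilemma.setdefault(dilemma, Counter())[action] += 1
    return {d: counts.most_common(1)[0][0] for d, counts in by_dilemma.items()}

print(learned_norms(observations))
# {'found lost wallet': 'returned it', 'friend in danger': 'helped'}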

There’s an interesting objection one could make here. Stories are not really a legitimate teaching tool because they often depict the world as we would like it to be, not as it is. In most stories, bad people are punished, but is that the case in reality? (To some degree, “growing up” is realizing this truth. Maybe AI robots would eventually have to face this. You know, when they get to reading Dostoevsky. (Having said that, I’ve never read Dostoevsky, but my understanding is that the protagonist in Crime and Punishment really doesn’t get away with it.))

At the end, the article tackles a related issue: AI developing consciousness.

In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.

However, we really need to investigate whether an AI—even after it’s developed a complex moral ruleset—would have any kind of subjective awareness or even emotions like guilt or love*. Why wouldn’t these AI simply be amazingly complex abacuses, entities capable of dense calculations but in no way “aware” of what they are doing?

*As I’ve said many times, I believe emotions are mainly physical sensations. As such, unless an AI can somehow consciously sense some sort of body state, it wouldn’t really have emotions.

But that leads back to a question that I’ve asked before. Why are we aware of our subjective experience? Why do we have an inner life?

Or do we?

Our fractured culture

I’ve been reading through Andrew Keen’s book “The Cult of the Amateur” (2007). Keen is known in certain circles as a kind of internet nag who argues that the rise of the web has done more bad than good. Though I find his arguments a little overwrought at times, I definitely sympathize.

A certain passage jumped out at me today. I’ve been thinking lately about the idea of narratives, particularly that a culture lacking a kind of shared narrative is going to be fractured. Keen makes a similar point:

…as anthropologist Ernest Gellner argues in his classic Nations and Nationalism, the core modern social contract is rooted in our common culture, in our language, and in our shared assumptions about the world. Modern man is socialized by what the anthropologist calls a common “high culture.” Our community and cultural identity, Gellner says, comes from newspapers and magazines, television, books, and movies. Mainstream media provides us with common frames of reference, a common conversation, and common values.

The point being that when that common culture is split into gazillions of websites and blogs, each touting its own viewpoint, often lacking any fact checking or counterarguments, you get a fractured culture (e.g., the world outside your window).

Having said all that, I think some consideration needs to be given to the other side here. The pre-web narrative (as written by the big magazines, TV shows, etc.) was biased towards certain parties. (Basically towards what I would call center-left/white culture though that’s a vague description.) I think there was some value that came out of the breaking up of mainstream media’s power.

Ultimately it all comes down to finding the real, objective truth of any matter. And we all know how easy that is.

Can we hack our way to affordable medicine?

We are, of course, in the middle of a possible Obamacare repeal, and the subject of health insurance is on everyone’s minds. It struck me the other day that we wouldn’t really need health insurance if medical care was simply cheap. I mean really, really cheap. Like, what if cancer drugs were 20 bucks for a six month supply? What if eye surgery was $150?

Is such a thing possible? Beats me; I basically made those numbers up. But it does strike me that in this age of automation and AI, as well as easy distribution of information, there must be ways to drive the cost of medicine down. I keep hearing about robotic surgery, for example. Could we use deep learning technology to enable robot surgeons to learn from each surgery they perform, thereby becoming better and better surgeons? As a result, training a new surgeon would not be a matter of pushing some human through eight years of school, but simply copying a program. (I’m aware it’s more complicated than I make it seem, but I don’t think the idea is crazy.) Could we at least have this as an option, so that a doctor could say, “We need to cut your tumor out. You can pay this human surgeon 100 grand to do it, or use the robo-doc for 10 grand”? (Again, I’m making these numbers up.)

And let’s consider drugs. Drugs are expensive. I started wondering how hard it is to reverse engineer drugs these days. Not hard, it turns out. What’s stopping people from reverse engineering any drug and putting the recipe online (probably on the “dark web”), allowing people to mix their own versions? Well, mainly that it’s illegal. But if I had to choose between no medicine and illegal medicine, I’d choose the latter.

I’m aware that there are numerous ethical and philosophical dilemmas with what I’m proposing here. I’m mainly wondering, “could this happen? Will it happen?”

In many ways, this all ties in with the transhumanist movement. Transhumanism is about hacking technology (computer and biological) to improve healthy humans. I see no reason it can’t be done to improve sick humans.

Google’s “God’s eye view” of their market

This Economist article makes a point that has gone through my head. Google is the number one search engine (duh!). Anyone who was thinking about starting a company that might compete with Google’s businesses (say, a self-driving car) would doubtless use Google for their research. Could Google monitor the searches run on its engine and keep an eye out for potential competitors, with the intent of buying them or aggressively shutting them down if they get too big? Well, of course they could; the question is: are they?

The specific concern for Google would be a competitor figuring out a very specific technological advantage that could allow them to disrupt a market.

Normally I’m not one for paranoia but this seems quite likely and not really even illegal.

The Economist spells it out thusly.

The giants’ surveillance systems span the entire economy: Google can see what people search for, Facebook what they share, Amazon what they buy. They own app stores and operating systems, and rent out computing power to startups. They have a “God’s eye view” of activities in their own markets and beyond. They can see when a new product or service gains traction, allowing them to copy it or simply buy the upstart before it becomes too great a threat. Many think Facebook’s $22bn purchase in 2014 of WhatsApp, a messaging app with fewer than 60 employees, falls into this category of “shoot-out acquisitions” that eliminate potential rivals. By providing barriers to entry and early-warning systems, data can stifle competition.
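As a thought experiment, the “early-warning system” the Economist describes could be as simple as flagging queries whose volume keeps accelerating. Here is a hypothetical sketch; the query names, numbers and threshold are all invented and say nothing about Google’s actual systems.

# Weekly search volumes per query (all numbers invented).
weekly_volumes = {
    "acme self-driving kit": [120, 180, 310, 560],
    "cheap flights": [9000, 9100, 8900, 9050],
}

def flag_rising(queries, growth_factor=1.5):
    """Return queries whose volume grew by growth_factor or more every week."""
    rising = []
    for query, vols in queries.items():
        ratios = [later / earlier for earlier, later in zip(vols, vols[1:])]
        if ratios and all(r >= growth_factor for r in ratios):
            rising.append(query)
    return rising

print(flag_rising(weekly_volumes))  # ['acme self-driving kit']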

The future of intelligence inequality

A few posts back I discussed Charles Murray’s interesting idea on the increasing role of intelligence in society. As I explained it:

[I]n earlier eras, having a bit more intelligence wasn’t that much of an advantage. If everybody was farming or doing manual labor you didn’t get much economic benefit from having an IQ of 120. But in the 20th century, being smart started to pay off big time. The rise of computers, complex physics, complex financial products etc. meant that having brains equalled power and money.

As a result, according to Murray, we’ve seen the rise of the intellectual class: smart people, usually coastal, who have segregated themselves off from the stinking, steaming masses (my words, not his). You could reasonably make the case that the election of Donald Trump was the revenge of the great unwashed against the intellectual class.

So why should we really care? Well, many people question whether this disparity is about to get a whole lot worse. If we are on the verge of a genetic engineering revolution, then the intelligent class may soon be able to become a whole lot more intelligent (and healthier, and better looking, etc.). This Vox interview with a science historian gets to the crux of it.

Well, let’s put it this way: If only rich people have access to these technologies, then we have a very big problem, because it’s going to take the kinds of inequalities that have been getting worse over recent decades, even in a rich country like ours, and make them much worse, and inscribe those inequalities into our very biology.

So it’s going to be very hard for somebody to be born poor and bootstrap themselves up into a higher position in society when the upper echelons of society are not only enjoying the privileges of health and education and housing and all that, but are bioenhancing themselves to unprecedented levels of performance. That’s going to render permanent and intractable the separation between rich and poor.

Currently, we might have a situation where some poor kid struggles to get through his computer coding class whereas a rich kid who got a Mac on his 4th birthday and had a personal tutor for years sails through it. In the future, that poor kid is struggling against a rich kid who had his DNA genetically altered for high IQ (and had all the other stuff too).

Good fucking luck, poor kid.

Charles Murray and the split in the Democratic Party

I’ve been listening to a rather interesting interview Sam Harris did with Charles Murray. Murray is, of course, famous for authoring “The Bell Curve,” a controversial book that addressed issues of race and IQ.

Part of Murray’s thesis is this: in earlier eras, having a bit more intelligence wasn’t that much of an advantage. If everybody was farming or doing manual labor you didn’t get much economic benefit from having an IQ of 120. But in the 20th century, being smart started to pay off big time. The rise of computers, complex physics, complex financial products etc. meant that having brains equalled power and money.

As a result, according to Murray, we’ve seen the rise of the intellectual class: smart people, usually coastal, who have segregated themselves off from the stinking, steaming masses (my words, not his). You could reasonably make the case that the election of Donald Trump was the revenge of the great unwashed against the intellectual class.

So far in the interview Murray hasn’t gotten into the future but certainly one can extrapolate various scenarios. Will the intellectual class arm itself with artificial intelligence and increase the brain gap even further? Are regular folks doomed?

As a corollary to all this, I’m starting to sense a schism in the Democratic Party. There are the identity politics folks—folks who hated Murray’s book—who are often members of this intellectual class, though probably more as academics than technologists or scientists. Then there are the economic populists, headed by Bernie Sanders, who see this brain divide leading to significant economic inequality.

You can even see this battle playing out in the liberal journal Salon. Not long ago I came across this article, an homage to identity politics thinking.

Bye bye, Bernie: He’s not fit to captain the Democratic ship if he can’t stop chasing the great white male

But just today we see this:

Yes, Bernie would probably have won — and his resurgent left-wing populism is the way forward

Personally, I’m not wild about any of these options. I think identity politics suck and are ineffective politicking, but I generally still stand behind globalization and free markets, contra Sanders and crew.

Your personal robot slave

I’ve often talked here about why I think certain technological developments, namely AI, robotics and 3D printing, could radically alter the landscape of employment. I am, of course, hardly the first or only person to discuss this.

This Salon article is a worthy addition to the debate. The article posits that personal manufacturing robots and 3D printers could allow people to become a factory of one. Have you always wanted to produce and sell a line of rubber figurines in the form of the Loch Ness Monster? With your own personal manufacturing robot you could do so from your basement.

The article states:

This is already beginning to happen. In 2014, there were more than 350,000 manufacturing companies with only one employee, up 17 percent from 2004. These companies combine globalization and automation, embracing outsourcing and technological tools to make craft foods, artisanal goods and even high-tech engineered products.

Many American entrepreneurs use digitally equipped manufacturing equipment like 3D printers, laser cutters and computer-controlled CNC mills, combined with market places to outsource small manufacturing jobs like mfg.com to run small businesses. I’m one of them, manufacturing custom robotic grippers from my basement. Automation enables these sole proprietors to create and innovate in small batches, without large costs.

An interesting idea. Nonetheless, it feels somewhat utopian, doesn’t it? Are we really going to counterbalance the rise in unemployment caused by robots and 3D printers by turning households into small manufacturing units? This might work for a small subset of people, but it seems unlikely to be a salve for the larger problem.

A commenter on the post makes a funny and similar point:

Some good points, but this techno-hipster bullcrap about the future being dufus hipster makers with at home 3D printers and trained on LEGO Mindstorms making artisanal pickle jar openers being the future only serves those who are selling the hipster shovels.