Double your penis?

Yeah, yeah, I know, I haven’t been posting here much. That’s life.

However, keen-eyed readers may recall that a certain pet peeve of mine is writers talking about numbers in terms of percentages as a way of disguising the fact that the numbers are rather underwhelming. For instance, you might see a headline like “Murder Rate Up 200 Percent in Indiana Town” only to find that the victim count went from three to nine in a year. Not exactly a reign of terror.

This article humorously illustrates the same thing.

Testosterone Injections Caused Patient’s Penis To Double In Length

Is this guy now packing a monster schlong? Umm, no. He had a condition called hypogonadism, which resulted in a penis that was 1.9 inches long. After the treatment he was a whopping 3.7 inches.
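Both headlines are technically accurate, which is the whole trick. Here's a quick sketch of the arithmetic, using the numbers above:

```python
# The percent-change trick, worked with the numbers above: both claims are
# technically true even though the underlying quantities stay small.
def pct_increase(old, new):
    return (new - old) / old * 100

print(pct_increase(3, 9))  # 200.0 -- "up 200 percent" is three victims to nine
print(3.7 / 1.9)           # ~1.95 -- "doubled," from 1.9 inches to 3.7
```

A 200 percent increase means the new number is triple the old one, but when the old number is three, tripling it still leaves you with single digits.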

What are words for?

I’ve been doing some thinking about the limits of words, specifically how words can cordon off the real meaning of ideas. For example, consider that we are all aware of the word “justice.” What does that word really represent? One person’s definition might lean towards “social justice,” whereas another’s is more about property rights, while a third person has yet another definition. When these people speak, their use of the word “justice” is out of sync.

I was looking for examples of this sort of thing and I had some vague recollection that two nations either came to blows, or almost did, because of a mistranslated word in a speech. I couldn’t find the example (which may exist only in my head) but I was surprised to find that Khrushchev’s famous “We will bury you” comment was a mistranslation.

In 1956, Soviet premier Nikita Khrushchev was interpreted as saying “We will bury you” to Western ambassadors at a reception at the Polish embassy in Moscow. The phrase was plastered across magazine covers and newspaper headlines, further cooling relations between the Soviet Union and the West.

Yet when set in context, Khrushchev’s words were closer to meaning “Whether you like it or not, history is on our side. We will dig you in”. He was stating that Communism would outlast capitalism, which would destroy itself from within, referring to a passage in Karl Marx’s Communist Manifesto that argued “What the bourgeoisie therefore produces, above all, are its own grave-diggers.” While not the most calming phrase he could have uttered, it was not the sabre-rattling threat that inflamed anti-Communists and raised the spectre of a nuclear attack in the minds of Americans.

Dangerous Data

In a recent article on political advertising I said…

Think about what a person’s web activities and Facebook likes reveal. Look at that guy over there who frequents the Huffington Post and “likes” the Black Lives Matter page. A bleeding heart liberal no doubt. How about the gal who hovers over the NRA blog and likes Sean Hannity’s page? You get the picture.

The more I muse on that point, the more I realize how useful Facebook likes are for assembling a political profile of a person. And it’s not only the obvious stuff like whether they “like” a certain candidate or political TV show. A lot can be deduced from the books a person “likes.” Someone who liked (I’m going to stop enclosing like in quotes) author Toni Morrison is presumably a liberal, even a certain kind of liberal (concerned with social justice, less concerned about free trade). And they might respond better to a specific advertising approach (touchy-feely as opposed to a rousing “let’s get those Republicans!”).

On top of all that, liking Toni Morrison probably exposes something about a person’s culinary taste (open to ethnic food), movie choices (dramas and Woody Allen comedies), interest in video games (nada) and so on. And liking Toni Morrison is only one data point about a person. What if you could access hundreds of data points? (And you can on Facebook.) You could develop a complete picture of a person including some unexpected revelations. Careful analysis might reveal that people who like both Toni Morrison and Grand Theft Auto are also big fans of power tools.
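To make that concrete, here’s a minimal sketch of how likes become a political label. Everything below is invented for illustration (the pages, the people, the labels); the point is only that a pile of binary likes is exactly the kind of input a bog-standard classifier eats for breakfast:

```python
# Toy example: predict political lean from Facebook likes.
# All data below is made up for illustration.
from sklearn.linear_model import LogisticRegression

# Column order for the like vectors below.
pages = ["Toni Morrison", "Black Lives Matter", "NRA", "Sean Hannity"]

# Each row is one person; 1 means they liked that page.
likes = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
]
labels = [0, 0, 1, 1]  # 0 = liberal, 1 = conservative

model = LogisticRegression().fit(likes, labels)

# A new person who likes only Toni Morrison: the probability
# mass should lean toward label 0.
print(model.predict_proba([[1, 0, 0, 0]]))
```

Scale the rows to two billion users and the columns to hundreds of data points, and the “unexpected revelations” fall out of the correlations more or less for free.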

Additionally, likes aren’t the only data points advertisers can access. What if everything a person ever said on Facebook was up for grabs? Maybe he or she never liked Toni Morrison’s page but did once say in a comment, “I’m a big fan of Beloved.” Up until recently, this kind of “conversational” information has been outside of software’s comprehension, but AI is changing that. What if software could access ten years of a person’s Gmail to construct a profile of them? What would it learn?
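Even a crude sketch shows how low the bar is. The snippet below just matches book titles (again, invented examples); real systems with actual language understanding can go far beyond this:

```python
# Naive "conversational" profiling: scan comments for telling mentions.
# The comments and the signal table are invented for illustration.
comments = [
    "I'm a big fan of Beloved.",
    "Anyone watch the game last night?",
]

signals = {
    "Beloved": "Toni Morrison reader",
    "Song of Solomon": "Toni Morrison reader",
}

for comment in comments:
    for phrase, inference in signals.items():
        if phrase in comment:
            print(f"{inference!r} inferred from: {comment!r}")
```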

We’ve heard for years from activists who complain that we are giving away something of great value when we use Facebook and similar data-gathering web sites. I’ve tended to blow those complaints off, but I’m starting to see the danger. Data is tremendous power.

The James Damore Manifesto

The latest internet controversy seems to be about James Damore, a Google employee who posted a manifesto to the company’s internal message board. The manifesto made various arguments, among them the idea that women may not be suited to the rigors of software engineering for reasons of biology. After Damore posted his document, it leaked to the public, the predictable uproar ensued, and the author was fired.

Nothing can really be gained by offering my thoughts on this, but what the hell.

I’m aware that Google has every right to fire any employee whenever they want, so there’s no free speech issue here. That said, I don’t think firing Damore was the best tactic. We live in an era where, for every workplace grievance, the only punishment advocated is employee termination. But in this case I suspect the result will be that Google employees sympathetic to Damore’s statements will now just keep their mouths shut. Their views will not be challenged (since one can’t challenge unexpressed ideas) and they’ll probably even harden their stance because of what they saw happen to a fellow traveller.

What if, instead of firing Damore, Google had presented a public debate on the issue of gender roles and biology? This would have produced an airing of the issues and allowed Google to explain why it found Damore’s ideas repugnant.

I concede that one can make a decent argument for Damore’s firing. After his screed was posted, any female subordinate of his could justifiably fear that his biases were harming her career. She could reasonably suspect that his beliefs prevented a fair assessment of her talents.

So far, I’ve been avoiding the elephant in the room. How legitimate are Damore’s arguments? First, I have to confess that I’m currently sitting in a Discount Tire showroom with no internet access, so I can’t review the specifics of his manifesto. But they are arguments we are all familiar with: women can’t handle stress, they prefer an even work/family balance that limits their ability to do overtime, they aren’t as status-driven as men and thus slower to climb to high positions, etc. Are any of those points valid?

Well, I don’t know. I don’t think any of those arguments have been proven scientifically. I doubt they could be. And I think gender bias is real, so we need to consider that as a cause of the lack of women in traditionally male vocations. Additionally, there’s plenty of evidence that the mostly male software development culture has elements of misogyny.

That said, I think most of us believe that there are behavioral differences between men and women. And we suspect that some of those differences have biological causes that were “programmed” into our brains by evolution. (I recognize there are all sorts of controversies tied into the preceding sentences: nature versus nurture, how behaviors can be encoded into biology, and so on. I’m going to ignore them for now.)

Is there any evidence for these beliefs and suspicions? It’s been a while since I’ve read up on the topic, but I believe there is some meat on the bone, generally focused on testosterone/estrogen levels and that sort of thing. I’m entirely willing to be proven wrong by contrary evidence.

But exploring this evidence (or lack thereof) is exactly the kind of thing I think an open debate would have initiated. Instead we’ve simply gotten more anxieties and simmering resentments.

Sammy Hagar on our robot overlords

I happen to be reading through Sammy Hagar’s autobiography, “Red,” these days. (I know—I’m always reading these dense, philosophical tomes!) As you might predict, it has a lot of dirt on Eddie Van Halen.

It also has a paragraph that ties in with a lot of modern commentary on the robotization of the workforce and the dangers it presents. Sammy discusses his meetings with a bigwig at the Campari company.

He showed me the new $100 million Campari factory. Only about five people were running the whole place with these efficient new machines that wrap and seal twenty-five hundred cases of Campari in, like, two minutes. … Twenty years ago, they probably had six thousand employees. Now they have a dozen, most in the office.

From six thousand to a dozen. Hmmm…

Repeating history

I’ve been reading Robert Wright’s religious history tome, “The Evolution of God.” The chapters I’m currently on describe a lot of the pre-Christ (BCE) world of Israel and the Middle East. One thing that strikes me is that many of the issues the countries of that world faced are the ones we face today.

For example, a big issue back then was foreign workers. Some people were for them, some loathed them. In a section on the Biblical book of Ruth, Wright notes…

According to the Bible, Israel was then employing many foreigners as workers on royal projects and mercenaries in the army. Maybe, the argument goes, the book’s theme of ethnic tolerance was meant to validate foreign intercourse of an economic sort.

Later Wright adds…

When foreigners agree to work for Israel’s elites, elites and foreigners alike see a gain in the relationship.

Of course, who gets pissed at this relationship? (Not unjustifiably, I might add.) The non-elites, i.e., the Donald Trump voters of Old Testament times.

What do I think of Trump so far?

First, I have to note that it’s been a long while since I’ve posted here. I suppose the web is filled with people apologizing for not updating their blogs so I won’t do that. I’ll simply note that life can be hectic. Obviously I’m still writing over at acidlogic.com and keeping busy with other things.

So yes, Trump. Keen-eyed readers who followed my writing during the elections will note that I was a fan of Scott Adams’ theories on Trump, ideas that generally presented Trump as a kind of political genius. And we all have to concede that Trump’s rise was extraordinary, predicted by almost no one (other than Adams and a few others).

But (there’s always a but) if Trump was such a genius, why has he been, as a president, a complete boob? He seems largely ineffectual, incapable of enacting much of his agenda, and basically clownish.

One can postulate that there are different types of skills. The skills relevant to getting elected may be far removed from the skills necessary for governing. It could be that Trump is merely a genius promoter and that’s it. It could also be that running a national government is simply a far more complex task than running a business empire.

Part of the Trump appeal seemed to be that he was going to aid certain groups, particularly poor and middle class whites. And that he was going to break from Republican orthodoxy (free markets, etc.) to do so. I don’t really get a sense of that happening. It seems like the Obamacare repeal that Trump touted is falling apart mainly because some Republicans feel it will hurt these very groups. (Granted, Trump himself did decry the House version as “mean.”)

In one sense Trump has been very predictable, and this is because he continues to be, as he was while campaigning, very unpredictable. (He’s predictably unpredictable.) I think the media and intelligentsia still haven’t completely got this. But it’s an almost pointless lesson to learn, as it offers no real predictive value.

I will say this: I think Trump may last a lot longer than some expect. I’m dubious this Russia thing is going to take him down unless it develops some real teeth.

Reading robot stories

It’s always interesting when you start thinking about some concept and then see it pop up all over the place. For instance, I’ve lately been talking about narratives—the idea that we define our reality according to various stories we tell ourselves. And I mentioned that narratives are the way we pass our values and beliefs to each other.

Then I stumbled across this article about narratives being used to imbue AI robots with a kind of moral ruleset.

An AI that reads a hundred stories about stealing versus not stealing can examine the consequences of these stories, understand the rules and outcomes, and begin to formulate a moral framework based on the wisdom of crowds (albeit crowds of authors and screenwriters). “We have these implicit rules that are hard to write down, but the protagonists of books, TV and movies exemplify the values of reality. You start with simple stories and then progress to young-adult stories. In each of these situations you see more and more complex moral situations.”

Though it differs conceptually from GoodAI’s, Riedl’s approach falls into the discipline of machine learning. “Think about this as pattern matching, which is what a lot of machine learning is,” he says. “The idea is that we ask the AI to look at a thousand different protagonists who are each experiencing the same general class of dilemma. Then the machine can average out the responses, and formulate values that match what the majority of people would say is the ‘correct’ way to act.”
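That “average out the responses” idea is easy to caricature in a few lines. This is my own toy rendering, not Riedl’s actual system, and the story outcomes are invented:

```python
# Toy version of learning a norm from stories: tally what protagonists do
# when facing the same dilemma, and treat the majority action as "correct."
# The counts below are invented for illustration.
from collections import Counter

protagonist_actions = ["return the wallet"] * 870 + ["keep the wallet"] * 130

norm, count = Counter(protagonist_actions).most_common(1)[0]
print(f"Learned norm: {norm} ({count / len(protagonist_actions):.0%} of stories)")
```

Real machine learning replaces the tally with statistics over much messier features, but the “wisdom of crowds of authors” logic is the same.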

There’s an interesting objection one could make here. Stories are not really a legitimate teaching tool because they often demonstrate the world as we would like it to be, not as it is. In most stories, bad people are punished, but is that the case in reality? (To some degree, “growing up” is realizing this truth. Maybe AI robots would eventually have to face this. You know, when they get to reading Dostoevsky. (Having said that, I’ve never read Dostoevsky but my understanding is that the protagonist in Crime and Punishment really doesn’t get away with it.))

At the end, the article tackles a related issue: AI developing consciousness.

In science fiction, the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we’ve done that, we may well feel compelled to reconsider how we treat them.

However, we really need to investigate whether an AI—even after it’s developed a complex moral ruleset—would have any kind of subjective awareness or even emotions like guilt or love*. Why wouldn’t these AI simply be amazingly complex abacuses, entities capable of dense calculations but in no way “aware” of what they are doing?

*As I’ve said many times, I believe emotions are mainly physical sensations. As such, unless an AI can somehow consciously sense some sort of body state, it wouldn’t really have emotions.

But that leads back to a question that I’ve asked before. Why are we aware of our subjective experience? Why do we have an inner life?

Or do we?

Our fractured culture

I’ve been reading through Andrew Keen’s book “The Cult of the Amateur” (2007). Keen is known in certain circles as a kind of internet nag who argues that the rise of the web has done more bad than good. Though I find his arguments a little overwrought at times, I definitely sympathize.

A certain passage jumped out at me today. I’ve been thinking lately about the idea of narratives, particularly that a culture lacking a kind of shared narrative is going to be fractured. Keen makes a similar point:

…as anthropologist Ernest Gellner argues in his classic Nations and Nationalism, the core modern social contract is rooted in our common culture, in our language, and in our shared assumptions about the world. Modern man is socialized by what the anthropologist calls a common “high culture.” Our community and cultural identity, Gellner says, comes from newspapers and magazines, television, books, and movies. Mainstream media provides us with common frames of reference, a common conversation, and common values.

The point being that when that common culture is split into gazillions of web sites and blogs, each touting its own viewpoint, often lacking any fact-checking or counterarguments, you get a fractured culture (e.g. the world outside your window).

Having said all that, I think some consideration needs to be given to the other side here. The pre-web narrative (as written by the big magazines, TV shows, etc.) was biased towards certain parties. (Basically towards what I would call center-left/white culture, though that’s a vague description.) I think there was some value that came out of the breaking up of mainstream media’s power.

Ultimately it all comes down to finding the real, objective truth of any matter. And we all know how easy that is.

A lie by another name?

One conversational tic I find interesting is when people correct a misspoken comment by saying it’s a lie. For example, when someone says, “I saw that movie last Saturday night. No, wait, that’s a lie. It was last Friday night.”

Obviously they weren’t really lying; they just made a mistake. And more to the point, they probably didn’t really think they were lying when they said it. (At that point they’re lying about lying, though neither is a lie they think would fool anyone.)

Very curious stuff.