The death of individuality

I’m working on my next article for acid logic and it’s essentially a list of modern-day fears that I think could be exploited by horror movie creators. One fear is fairly esoteric: a fear of the loss of identity brought about by the hyper-connectedness of the age. In essence, we are so hooked into each other that when a subject comes up we immediately know what everyone else thinks about it and tailor our opinions and ideas to match the group we want to associate with. (Political tribes are an obvious example of these groups.)

The fear is not so much about this process but the crisis of self it could bring about. If you wake up one day and find that your opinions totally match some subset of the masses, would you start to wonder whether you really exist on a meaningful level? Would you conceive of yourself as merely a vessel for popular opinion?

Tackling the new atheists

In one of my recent articles on heretical ideas I noted that a) I’m an atheist, and b) I don’t think there is any way to divine morality. This puts me at odds with most of the “New Atheists” like Sam Harris and Richard Dawkins who argue we can have morality without God.

There’s an interesting article in the Spectator by a writer named Theo Hobson. He is presumably religious and takes a rather snippy tone toward atheists. But I think he makes some points that complement my own, and his piece is worth reading if you need something philosophical to curl up with.

In these paragraphs he points out something I’ve thought about. New Atheists dismiss faith, but their insistence that some kind of moral rule can be ascertained sounds an awful lot like faith itself.

The trouble is that too many atheists simply assume the truth of secular humanism, that it is the axiomatic ideology: just there, our natural condition, once religious error is removed. They think morality just comes naturally. It bubbles up, it’s instinctive, not taught as part of a cultural tradition. In The God Delusion Richard Dawkins tries to strengthen this claim using his biological expertise, arguing that humans have evolved to be altruistic because it ultimately helps their genes to survive. But in the end, he admits that no firm case can be made concerning the evolutionary basis of morality. He’s just gesturing with his expertise, rather than really applying it to the issue at hand.

Here’s his muddle. On one hand he believes that morality, being natural, is a constant thing, stable throughout history. On the other hand, he believes in moral progress. To square the circle he plunges out of his depth, explaining that different ages have different ideas of morality, and that in recent times there has happily been a major advance in our moral conventions: above all, the principle of equality has triumphed. Such changes ‘certainly have not come from religion’, he snaps. He instead points to better education about our ‘common humanity with members of other races and with the other sex — both deeply unbiblical ideas that come from biological science, especially evolution’. But biological science, especially evolution, can be used to authorise eugenics and racism. The real issue is the triumph of an ideology of equality, of humanism. Instead of asking what this tradition is, and where it comes from, he treats it as axiomatic. This is just the natural human morality, he wants us to think, and in our times we are fortunate to see a particularly full expression of it.

It’s interesting that he argues that new atheists feel moral truth is “instinctive.” I tackled this very premise in my article.

Another New Atheist, Sam Harris, hints at something similar in this Big Think video when he says (after arguing that we don’t need God for morality) that we have “some very serviceable intuitions about what good and evil are.” The problem, however, is that feelings and intuitions (programmed into us via evolution or not) are not a logical basis from which to define moral behavior. Most of us would agree that proposing the murder of a 10-week-old baby feels wrong, but that doesn’t mean it can be logically shown to be so. We can even construct scenarios where killing the baby is the right thing to do for the greater good (say, the baby is the carrier of a deadly disease that cannot be allowed to spread). In such cases, killing the baby might be the right thing to do (according to conventional ethics), but I think we all know that it would still feel awful to carry out the act. From that we must conclude that feelings and intuitions are not a trustworthy means of divining morals.

Robo-sales

It seems like all I ever do around here is comment on the increased use of robots and A.I. software in the workforce. (But I sure look good doing it!) I’ll try and keep it in check. Nonetheless, this little tidbit caught my eye. It’s from an article in the April 3, 2014 New York Times Book Review called “The Programmed Prospect Before Us.”

Recently, Michael Scherer, a Time magazine bureau chief, received a call from a young lady, Samantha West, asking him if he wanted a deal on health insurance. After she responded to a number of his queries in what sounded like a prerecorded fashion, he asked her point blank whether she was a robot, to which he got the reply, “I am human.” When he repeated the question, the connection was cut off. Samantha West turned out to be a system of recorded messages that were part of a computer program created by the brokers of health insurance.

If this is true, it’s such a clumsy application of computer intelligence that it makes me think we have nothing to worry about. Nobody likes talking to robo-software, much less robo-software that claims to be human. It seems these sorts of efforts are designed simply to annoy people. (Note: Read update below for more details on Samantha West.)

Of course, maybe there’s a kind of spam mindset going on. “We annoy 100,000 people with bullshit robo-calls but 10 of them actually buy the insurance and we clear our margins.”

The article continues.

The point is not that humans were not involved, but that experts had worked out that far fewer of them needed to be involved to sell a given quantity of health insurance. Orthodox economics tells us that automating such transactions, by lowering the cost of health insurance, will enable many more policies to be sold, or release money for other kinds of spending, thus replacing the jobs lost. But orthodox economics never had to deal with competition between humans and machines.

I’m not sure that final sentence is exactly correct, as a rumination on the tale of John Henry should illustrate, but I get the gist and it’s thought-provoking.

Update!
I was intrigued enough by Samantha West to look up the details. It turns out they are a little cloudier than stated above, though the concerns raised are legitimate.

Robot-denying telemarketing robot may not actually be a robot.

As Time is now reporting, the telemarketing robot is actually a computer program used by telemarketers outside the United States. According to John Rasman of U.S.-based Premier Health, the system allows English speaking telemarketers with thick non-American accents to sort through leads to find real prospective buyers before passing them off to agents back in the United States. “We’re just contacting people in a way they’re not familiar with,” said Rasman. The human agents who trigger Samantha West’s responses act as brokers for health insurance companies inside the U.S.

So it’s not robots stealing our jobs, it’s those damn foreigners!

Robo-milking

I frequently talk about the robotization of the workforce. Today’s NY Times has an article that caught my eye: “With Farm Robotics, the Cows Decide When It’s Milking Time.”

In essence, robotic devices are now available to milk cows. It frees humans from a disagreeable task and…

The cows seem to like it, too.

Robots allow the cows to set their own hours, lining up for automated milking five or six times a day — turning the predawn and late-afternoon sessions around which dairy farmers long built their lives into a thing of the past.

With transponders around their necks, the cows get individualized service. Lasers scan and map their underbellies, and a computer charts each animal’s “milking speed,” a critical factor in a 24-hour-a-day operation.

For some reason I’m reminded of an old New Yorker cartoon my dad often recollects. A farmer is milking a cow and the cow looks back at him and says, “Gently, please. It’s Mother’s Day.”

The rise of goop

An interesting idea dawned on me recently. Many people have commented that the advent of 3D printing means that it is really the design of objects, rather than the objects themselves, that has value. The day may arrive when, if you need a new thing—say, a coffee cup—you just use the design to print one off.

Of course, it’s not merely the design of the object that has value there; the raw materials used to make the thing—likely plastics and metals—have value. Maybe we’ll all have a pile of goop that we use to make new things. And if we run out of goop we’ll simply melt down some existing objects back to their goop form. “We’re out of scissors? Melt down the old television and use that!”

George Costanza was right!

Seinfeld fans may recall the episode where George decides to give himself the nickname “G-Bone.” Upon hearing this, Jerry says, “There’s no such thing as a g-bone. There’s a g-spot.” Furious, George replies, “That’s a myth!”

According to “Identically Different,” a book on genetics, George was right.

Although it’s very hard to prove the non-existence of something, we concluded that as the G spot is lacking in academic credibility among gynecologists, has not been found by scans or anatomists, and had not the tiniest genetic influence, it was probably a figment of the modern imagination. It was more likely an area through which the base of the clitoris can be felt and stimulated in some women. Our conclusions were not popular. We got many angry letters from Italian and French sexologists who charge their patients to find their hidden g spots and from plastic surgeons who increasingly do lucrative enhancement surgery by bulking up ‘the spot’ with injection of fillers like collagen. We also received outraged letters from ‘male expert lovers’ who claimed to have satisfied many women by uniquely being able to find their G spots. Strangely we didn’t receive a single letter from a woman.

Late breaking penis news!

Been a while since I’ve posted some penis news but I think this qualifies.

Wu-Tang Clan-associated rapper cuts off penis, jumps off building in suicide try

It’s like the old saying: If cutting off your penis doesn’t kill you then the jump from the building will!

Except, in this case it didn’t.

Incredibly once he hit the pavement they said he got back up on his feet and began running around, albeit incoherently.

Oh well… old sayings aren’t 100%.

The movie industry bloodbath has begun

I’m often rather loudly complaining around here about the devaluation of entertainment products brought about by the internet. This is partly because the internet engenders piracy, but also because piracy pushes creators to offer their work for free (since it’s probably going to end up available for free anyway). The result is the destruction of big chunks of the entertainment industry.

We’ve primarily seen this in the music business. But it stands to reason that as movies become more downloadable, the same thing could happen there. According to this excerpt from a book by screenwriter Lynda Obst, it is.

I leaned back a little on Peter’s comfortable couch, and he sat forward to say, “People will look back and say that probably, from a financial point of view, 1995 through 2005 was the golden age of this generation of the movie business. You had big growth internationally, and you had big growth with DVDs.” He paused to allow a gallows laugh. “That golden age appears to be over.”

“The DVD business represented fifty percent of their profits,” he went on. “Fifty percent. The decline of that business means their entire profit could come down between forty and fifty percent for new movies.”

For those of you like me who are not good at math, let me make Peter’s statement even simpler. If a studio’s margin of profit was only 10 percent in the Old Abnormal, now with the collapsing DVD market that profit margin was hovering around 6 percent. The loss of profit on those little silver discs had nearly halved our profit margin.

This was, literally, a Great Contraction. Something drastic had happened to our industry, and this was it. Surely there were other factors: Young males were disappearing into video games; there were hundreds of home entertainment choices available for nesting families; the Net. But slicing a huge chunk of reliable profits right out of the bottom line forever?

There it was. Technology had destroyed the DVD. When Peter referred to the “transition of the DVD market,” and technology destroying the DVD, he was talking about the implications of the fact that our movies were now proliferating for free—not just on the streets of Beijing and Hong Kong and Rio. And even legitimate users, as Peter pointed out, who would never pirate, were going for $3 or $4 video-on-demand (VOD) rentals instead of $15 DVD purchases.

Frankly, I never understood why people paid 15 bucks to own a DVD movie, but I guess they’ve come to their senses on that one. Netflix is probably a big reason for that, as you can essentially buy a huge streaming DVD collection for 7 bucks a month.
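Obst’s arithmetic is easy to check. Here’s a quick sketch using the numbers from her excerpt; the fraction of DVD profit actually lost is my assumption, picked to reproduce her “around 6 percent” figure:

```python
# Illustrative check of Obst's margin arithmetic.
# old_margin and dvd_share come from the excerpt;
# dvd_decline is an assumed figure, not from the book.

old_margin = 0.10   # studio profit margin in the "Old Abnormal"
dvd_share = 0.50    # DVDs' share of total profit ("Fifty percent")
dvd_decline = 0.80  # assumed fraction of DVD profit lost

# Losing 80% of a segment that was half of all profit cuts total
# profit by 40%, so the margin falls from 10% to 6%.
profit_lost = dvd_share * dvd_decline        # 0.40
new_margin = old_margin * (1 - profit_lost)  # 0.06

print(f"new margin: {new_margin:.0%}")  # → new margin: 6%
```

With a total DVD wipeout (dvd_decline = 1.0) the margin would drop to a flat 5 percent, which matches Peter’s “between forty and fifty percent” range for the profit decline.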

So what does this collapse mean in terms of movie quality? I think Obst’s article ties into an article I wrote a while back about the noticeable decline in the quality of current films’ stories. I used the blockbuster “WWZ” as an example.

On top of that, “World War Z” was just poorly written. There was no sense of ratcheting tension, no sense of real danger. The hallmark of the great horror films is that some of the characters—sometimes characters you really love—get killed. (Even “Shaun of the Dead,” which was something of a horror satire, got this.) Nobody you like in “WWZ” dies. (This is partly because you don’t like any of the characters, but that’s another complaint.) And unlike the book, the movie “WWZ” is devoid of clever plot twists. The main conceit of the film—the means by which Pitt formulates a way of stopping the zombies—barely generates a “meh.”

“World War Z” had the sense of being written by committee. When a story is written this way, any interesting proposed plot twist (say, killing a key character, or having a likeable character betray the group) is bound to upset someone in the room. If everyone working on the story is granted veto power, all life gets sucked out of a tale.

To quote Obst:

[The studios are] frozen, so the gut is frozen, the heart is frozen, and even the bottom-line spreadsheet is frozen. It was like a cold shower in hard numbers. There was none of the extra cash that fueled competitive commerce, gut calls, or real movies, the extra spec script purchase, the pitch culture, the grease that fueled the Old Abnormal: the way things had always been done. We were running on empty, searching for sources of new revenue. The only reliable entry on the P&L was international. That’s where the moolah was coming from, so that’s what decisions would be based on.

Gut calls are part of what leads to interesting, innovative movies. And deference to the international market means you have to dumb content down for non-English speakers and those who may not get the nuances of certain kinds of storytelling.

As I mention in my article, I think cheap horror flicks are still willing to take risks, as they always have. But I’m curious as to whether they are making any money.

Programming hunger

Last summer I had an experience that got me thinking about how much food we need to eat. I was in Paris with my mom, and found that even though we were walking around for most of the day, we only ate a couple of meals. We had a breakfast, mostly of bread (you know the frogs and their bread), and then a regular meal in the afternoon. It was less than I would normally eat at home, yet I was never hungry.

The New Yorker blog has a post that connects to this, noting that why we get hungry is often unconnected to our need for energy.

More often than not, we eat because we want to eat—not because we need to. Recent studies show that our physical level of hunger, in fact, does not correlate strongly with how much hunger we say that we feel or how much food we go on to consume.

Even if you’ve had an unusually late or large breakfast, your body is used to its lunch slot and will begin to release certain chemicals, such as insulin in your blood and ghrelin in your stomach, in anticipation of your typical habits, whether or not you’re actually calorie-depleted.

This probably doesn’t surprise anyone, indeed I think we all observe this. You’re not hungry at all but a plate of fried chicken passes before you and whammo—pig out city!

The article states that we start to see part of our environment as cues to eat. We have a great snack on our favorite couch and we become conditioned—like Pavlov’s dogs—to associate that couch with snacking. We drive to the dentist and are reminded how there’s a great donut shop nearby and we start to crave donuts. I think this is partly why I experienced so little food craving in Paris—it is a city unfamiliar to me and I had not programmed in the environmental cues to stimulate hunger.

On a side note, I recall reading about a very ineffective campaign against drug use that was set up by some city. (I read about this a while back; can’t recall the location.) The government placed billboards in ghetto neighborhoods saying things like “Cocaine: It’s Evil” and showing a big pile of cocaine. Of course the result was rehabbing drug users saw these signs and thought, “Oh man I would love to snort a pile of coke like that right now!”

Breatharians

Today in my readings I came across mention of something I’d never heard of: breatharians. These are people who believe they can live without food by subsisting on air and sunlight. It sounds insane of course, but a Google search reveals plenty of conversation about the topic. How do they do it? Well, for the most part they don’t.

In 1983, most of the leadership of the movement in California resigned when Wiley Brooks, a notable proponent of breatharianism, was caught sneaking into a hotel and ordering a chicken pie.

Mmmm… chicken pie.

Also note:

Under controlled conditions where people are actually watched to see if they can do it without sneaking some real food, they fail. The name most commonly associated with breatharianism is Australia’s Jasmuheen (born Ellen Greve), who claims to live only on a cup of tea and a biscuit every few days. However, the only supervised test of this claim (by the Australian edition of 60 Minutes in 1999) left her suffering from severe dehydration and the trial was halted after four days, despite Greve’s insistence she was happy to continue. She claimed the failure was due to being near a city road, leading to her being forced to breathe “bad air”. She continued this excuse even after she was moved to the middle of nowhere.

The various forms of human insanity seem to have no limits.