Archive for the 'Technology' Category

It’s Alive! Alive!

Lately I’ve been exploring this idea that we don’t know what consciousness is. I considered the possibility that consciousness could be some kind of “force.” My theory was that when this force travels through a complex network, like our human brain, it/we/something experiences what we call subjective consciousness.

I also asked: could this force simply be electricity (or the electromagnetic force)? It seems all too simple and rather Frankenstein-ian. I’ve done a bit of reading and the consensus seems to be “no,” though I need to read more.

One of the articles I read had some juicy tidbits on past experiments of applying electricity to the dead.

WIRED: What Happens If You Apply Electricity to the Brain of a Corpse?

In 1802, Aldini zapped the brain of a decapitated criminal by placing a metal wire into each ear and then flicking the switch on the attached rudimentary battery. “I initially observed strong contractions in all the muscles of the face, which were contorted so irregularly that they imitated the most hideous grimaces,” he wrote in his notes. “The action of the eyelids was particularly marked, though less striking in the human head than in that of the ox.”

In 1803, he performed a sensational public demonstration at the Royal College of Surgeons, London, using the dead body of Thomas Forster, a murderer recently executed by hanging at Newgate. Aldini inserted conducting rods into the deceased man’s mouth, ear, and anus.
One member of the large audience later observed: “On the first application of the process to the face, the jaw of the deceased criminal began to quiver, the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process, the right hand was raised and clenched, and the legs and thighs were set in motion. It appeared to the uninformed part of the bystanders as if the wretched man was on the eve of being restored to life.”

So what is information anyway?

With the advent of artificial intelligence (AI) there’s a lot of talk about computers knowing things, or processing information. But how does this actually work?

I’ll be upfront here and say, “I don’t know,” at least in any detailed sense. But thinking out loud on the topic might turn up some interesting observations.

Computers have been processing information for ages (and before computers, calculators, abacuses, etc. were doing it). With AI, computers are simply processing information better, faster and “deeper” than ever before.

But what is really going on when we say a computer processes “information”? What information?

Let’s first consider the notion of a “bit.” The term comes from the relatively recent discipline of information theory and refers to the smallest unit of information possible. In essence, it’s a yes or no question. For example, let’s say I was tracking information about the couches in my couch factory. These couches come in three colors—red, green and orange. So I could track that information in three bits: a bit that gets marked “yes” if the couch is red, a bit that gets marked “yes” if the couch is green and a bit that gets marked “yes” if the couch is orange. Actually I could get away with using only two bits by saying, “if the red bit is set to no and the green bit is set to no then the couch must be orange.”
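Here’s a minimal sketch of that couch example in Python (the color names and the particular two-bit encoding are just illustrative choices):

```python
# A minimal sketch of the idea above: three couch colors fit in two bits.

COLOR_TO_BITS = {
    "red":    (0, 0),
    "green":  (0, 1),
    "orange": (1, 0),
}

BITS_TO_COLOR = {bits: color for color, bits in COLOR_TO_BITS.items()}

def encode(color):
    """Turn a couch color into two yes/no answers (bits)."""
    return COLOR_TO_BITS[color]

def decode(bits):
    """Recover the color from the two bits."""
    return BITS_TO_COLOR[bits]

print(encode("orange"))  # (1, 0)
print(decode((0, 1)))    # green
```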

When you look out at the world, you can basically describe it using bits. Look at your best friend. Are they male, yes or no? Do they have a mustache, yes or no? Do they read this blog, yes or no? Are they gay, yes or no? And on and on…

You can see how this can be a remarkably effective tool, and this tracking of bits is what drives computing. For example, images can be “held” in a computer if you track the red, green and blue values (each represented as a number which can be captured as a series of bits*) for each pixel, plus, in some formats, a few extra values like transparency.

* More detailed explanation here, if you care.
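As a rough illustration of that pixel idea, here’s how a single pixel might be stored as three 8-bit numbers, 24 bits in all (the specific color values are made up):

```python
# A rough sketch: one pixel stored as three 8-bit numbers (red, green, blue).

pixel = (200, 120, 30)  # a brownish-orange

bits = "".join(format(channel, "08b") for channel in pixel)
print(bits)       # 24 characters of 0s and 1s
print(len(bits))  # 24
```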

But it’s key at this point to take a step back and realize that just because computers hold information about couches, best friends or images, that doesn’t mean they really know anything. They know nothing, because they are basically dumb electrical signals shuffling around. A computer knows the image it contains no more than an abacus knows the number value it just helped add. Both tools require a human being to come along and observe the information being represented. Without the human, a computer’s information is a bunch of yeses or nos, devoid of context or purpose.

I’m pretty sure some information theorists would disagree with some of what I’ve said here, but this is how I see it.

So that makes us feel pretty special as humans, right? We know stuff whereas these dumb computers just sit there twiddling their switches. But do we really know anything?

Like computers, we also seem to hold information in bits of a sort. We have neurons and they fire or they don’t*. (I believe I’m correct in saying neurons can actually convey more than just yes or no values because they can fire at different rates. To be honest, I’ve never been entirely clear on that, but for the purposes of this post we merely need to agree that neurons hold information in some way.) So, you observe a coffee cup and various neurons that activate for round shapes start firing, as do neurons that activate for the smell of coffee, past memories of coffee, the general sense of being amped up and awake, and on and on. Our brain “represents” the coffee cup using a lot of bits… I dunno how many. And we are aware of this represented information with different degrees of awareness. I might be strongly conscious of the notion “that is a coffee cup,” but I’m less aware of the sense that coffee tastes bitter, or that it has caffeine.

*I’m aware that information in brains is really held in the connections between neurons (synapses), but I think this explanation works for our purposes.
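To make the coffee-cup analogy concrete (and it is only a loose analogy, not neuroscience), you could imagine an observation as a set of features that “fired”; all the feature names below are invented for illustration:

```python
# A very loose analogy: represent an observation as the set of features that fired.

coffee_cup = {"round_shape", "coffee_smell", "warm", "has_handle", "bitter_taste"}
soup_bowl  = {"round_shape", "warm", "savory_smell"}

# Overlapping features suggest related percepts; here the overlap is round_shape and warm.
print(coffee_cup & soup_bowl)
```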

My point here, and I do have one, is this: with computers, we track information about objects (or concepts or whatever) but we understand that that information is meaningless until a conscious agent, probably a human, comes along and observes it. But brains also track bits of information. So who/what is the conscious agent that is required to observe that information in our brains and “convert” it from meaningless bits to useful information? This could be another way of asking, “What is consciousness?”

While thinking about this I stumbled across an interesting Quora question with fascinating answers (though no conclusive ones): How much information does a human brain neuron store?

Remote controlled robot workers

I’ve been reading a fascinating book entitled “The Rise of the Robots.” It is, as you might suspect, all about celery gardening.

I jest, of course. It is about robots and how the automation of physical and mental work threatens our economy.

There’s an interesting little premise that barely gets a mention in the book (so far) but seems worth describing here. We all understand that a lot of physical labor in product manufacturing has been replaced by robots. This has been easy because manufacturing involves a lot of repetitive tasks, which robots excel at. What is harder for robots to do is diverse physical labor like what a janitor does. A janitor doesn’t repeat the same exact task over and over again. He might clean one bathroom, then mop a floor, then clean some windows, then throw out some boxes, etc. And he might do this in a variety of buildings with different floor plans. Getting a robot to navigate all these tasks is still difficult. For one thing, while technology for robots to “see” is improving, it’s still not perfect.

But imagine this. You make an ambulatory robot with cameras attached and “grabbing” mechanisms as hands. You give control of this robot, via the internet, to some guy in an Indian call center. That guy, paid a pittance, provides the seeing and motion control for this robot. By running the robot, probably via something like a video game interface, he does the work remotely.

Could this kind of human/robot collaboration put real, first-world janitors out of work (especially with the push to raise the minimum wage)?

But wait. As they say, it gets worse. As this robot is being guided through the janitorial tasks, isn’t it being trained to do these jobs all by itself? For example, let’s say the guy in India guides the robot to wash the windows in a particular building once a week. As he guides the mechanical appendages through the process of window washing, the robot records the motions. Next week it can do the job all by itself. Really, the human is only needed to train the robot a few times, then the robot takes over. (This is sort of how the robot Baxter, working in factories as we speak, operates.)
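Here’s a toy sketch of that “record once, replay later” idea. The RobotArm class and its methods are invented for illustration; a real system like Baxter’s demonstration mode is obviously far more involved:

```python
# A toy sketch of record-and-replay training for a robot.

import time

class RobotArm:
    def __init__(self):
        self.recorded_moves = []

    def record(self, human_commands):
        """While a human guides the robot, store each commanded position."""
        self.recorded_moves = list(human_commands)

    def replay(self, delay=0.1):
        """Later, repeat the stored motions without a human operator."""
        for position in self.recorded_moves:
            self.move_to(position)
            time.sleep(delay)

    def move_to(self, position):
        print(f"moving to {position}")

arm = RobotArm()
arm.record([(0, 0), (5, 2), (5, 8), (0, 8)])  # a human-guided window-washing pass
arm.replay()                                  # the robot repeats it on its own
```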

And don’t get me started on remote controlled robot prostitutes!

Data as music

I stumbled across an interesting article discussing how scientists are rendering data in musical form. This, apparently, allows them to sense patterns in the data they might otherwise be unaware of.

Scientists can listen to proteins by turning data into music

Transforming data about the structure of proteins into melodies gives scientists a completely new way of analyzing the molecules that could reveal new insights into how they work — by listening to them. A new study published in the journal Heliyon shows how musical sounds can help scientists analyze data using their ears instead of their eyes.

The researchers, from the University of Tampere in Finland, Eastern Washington University in the US and the Francis Crick Institute in the UK, believe their technique could help scientists identify anomalies in proteins more easily.

“We are confident that people will eventually listen to data and draw important information from the experiences,” commented Dr. Jonathan Middleton, a composer and music scholar who is based at Eastern Washington University and in residence at the University of Tampere. “The ears might detect more than the eyes, and if the ears are doing some of the work, then the eyes will be free to look at other things.”

If you don’t fully comprehend what this all means, well, I’m right there with you. But one can easily envision a scheme where each data value is thought of as a number of steps away from an average value, and those steps are mapped onto a musical scale. Really large musical leaps would then indicate major deviations from the average.
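To make that concrete, here’s a rough sketch that maps each value’s distance from the average onto steps of a one-octave scale (the data and the scale choice are arbitrary examples, not how the researchers actually did it):

```python
# A rough sketch of sonification: deviation from the average becomes a scale step.

data = [4.8, 5.1, 5.0, 9.7, 4.9, 1.2, 5.2]
mean = sum(data) / len(data)

scale = ["C", "D", "E", "F", "G", "A", "B"]  # one octave; the middle note is the average

def to_note(value, step_size=1.0):
    steps = round((value - mean) / step_size)
    index = max(0, min(len(scale) - 1, 3 + steps))  # 3 is the middle of the scale
    return scale[index]

print([to_note(v) for v in data])  # big deviations become big melodic leaps
```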

And here’s another article also about data being transformed into music. I guess this is a “thing.”

Detecting patterns in neuronal dendrite spines by translating them into music

There’s an example of this “dendritic spines as sound” music here, and it’s pretty unappealing. (Part of the problem is that it’s rendered with hideous MIDI instrumentation.)

What separates man from machine?

I just finished an article on computers writing music, which got me thinking about computers thinking. (Thinking being a big part of music writing.) When humans compose music, or write stories, or pursue any art, we consciously process our decisions, choosing to try this or that idea. If computers can start to replicate these processes, they would be doing them unconsciously. (Unless we want to consider, as some have, that computers are conscious in some weird way, but that’s a debate for another time. For now I will presume they are not conscious.)

So let’s think about this. Let’s say I’m writing a story about a character named Bob who drives his car a lot. In my mind, Bob is a person and his car is an object, and I shuffle them through various scenarios that create the kind of tension fiction needs. How would a computer approach writing a story about Bob and his car? (Computers writing fiction is not that far off.)

Well, computers would never really be aware of Bob and his car. Ultimately a computer is simply turning the states of millions of transistors from on to off or vice versa. Bob would essentially just be several bytes’ worth of data, data simply being a collection of transistors in various states. All words—nouns, verbs, adjectives, etc.—are simply data captured by the state of transistors. The point being the computer never really knows the meaning of the words. At best it “knows” the flow of electricity (and even that statement is a stretch).

Essentially, computer programs map symbols (letters, music notes, patches of color, etc.) onto these transistors. And then they manipulate these mapped symbols to do various things, one of which is to create art. The symbols only have meaning to the audience, which is us humans. In a sense, a novel written by a computer could be said to not exist until a conscious human reads it.

So what makes us different from computers? We are conscious, obviously, but also these symbols have actual meaning to us. The word “Bob” can have an actual meaning, referring to a particular guy, fictional or not, who has various behavioral tendencies, characteristics, a certain appearance, etc. To us, Bob (the word) can represent a real person. We can map symbols to ideas/concepts/entities.

And yet, our brains work in a way pretty similar to computers. Our neurons are powered by electricity and we, in some weird way, hold information in our synapses. So why do we humans experience meaning when computers don’t?

I dunno…

Terrorist Drones

I continue to track Scott Adams’ blog to view his predictions about Trump. But today he tackled a different subject and he made note of something I’ve thought about myself.

Another key part of my prediction is that the Caliphate will start to weaponize hobby-sized drones for attacks all over the world.

Drones, those miniature helicopters that are popping up all over the place, seem like the perfect vehicle for lobbing explosives into crowds of people. I’m not talking about the rather small drones you see at the park, but the kind that can carry cameras, or the kind Amazon is testing for deliveries. If it can carry a package, it can certainly carry a bomb. And if the sky is awash with Amazon drones, how will we tell the legal ones from the ones delivering death? (There may actually be a way; in fact, I presume there is. But it complicates things.)

I think the bigger picture here is this realization: for every new technological advance from now on, we need to ask: how could this be employed by terrorists? Certainly robots and drones have obvious implications for terrorists. So too do advances in the biological sciences, like the design of viruses in a garage laboratory. And what can people create with 3D printers?

On a side note, recall that it was the Japanese in WWII who first utilized drone bombs.

Data mining the arc of stories

I’ve been thinking a bit about my next acid logic piece, which will tie in with a presentation author Kurt Vonnegut once gave on the shapes of stories. He argued that there is a limited set of flows that stories can take and that all stories ever written ultimately fit into this set. I might as well let him explain, as the presentation is online.

Today I came across an article about an attempt at using data mining to analyze the arcs of stories. Basically a computer “read” numerous stories and scored the emotional affect of the words to map out the emotional flow of each story. As the article explains…

Their method is straightforward. The idea behind sentiment analysis is that words have a positive or negative emotional impact. So words can be a measure of the emotional valence of the text and how it changes from moment to moment. So measuring the shape of the story arc is simply a question of assessing the emotional polarity of a story at each instant and how it changes.

It was then concluded that there are six basic story arcs or flows, though some novels combine several different arcs into a larger tale.
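Here’s a toy sketch of the sentiment-analysis approach the article describes: give words positive or negative scores, then track the running total through the text. The word scores and the “story” below are made up for illustration:

```python
# A toy sketch of sentiment analysis over a story: running emotional valence.

word_scores = {"love": 2, "happy": 2, "win": 1, "lose": -1, "sad": -2, "death": -3}

story = ("they fall in love and are happy . then death and sad times . "
         "but they win in the end and are happy again").split()

arc = []
total = 0
for word in story:
    total += word_scores.get(word, 0)  # unknown words count as neutral
    arc.append(total)

print(arc)  # the rise-fall-rise shape of this little "story"
```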

I’d be curious to see an analysis of the flow of other art forms—music, painting, cinema, etc.

5 reasons lists are awesome

I often deride the list-based blog posts and articles that have overtaken the internet, things like “6 Cat Photos That Will Have You On The Floor With Laughter.” That said, I stumbled across this semi-recent New Yorker piece that explains lists’ effectiveness.

One point of appeal is that we have an easier time remembering the content of lists, partly because we think spatially. So we remember a list bullet point partly because we recall where it was in the list. It’s not just an ethereal piece of info, it’s something that was halfway down the page.

As the article intones…

When we process information, we do so spatially. For instance, it’s hard to memorize through brute force the groceries we need to buy. It’s easier to remember everything if we write it down in bulleted, or numbered, points. Then, even if we forget the paper at home, it is easier for us to recall what was on it because we can think back to the location of the words themselves.

Also, lists let you know what you’re getting into; they tell you how much time you’ll have to commit to read them. (This is probably why articles like “786 Reasons to Vote for Hillary Clinton” would never fly.)

The more we know about something—including precisely how much time it will consume—the greater the chance we will commit to it. The process is self-reinforcing: we recall with pleasure that we were able to complete the task (of reading the article) instead of leaving it undone and that satisfaction, in turn, makes us more likely to click on lists again.

Is Facebook controlling you?

Much of what I’ve been reading about and thinking about over the past several months has to do with the notion that people are controllable. Scott Adams’ theories on Donald Trump, which I often mention, state that Trump is a master persuader—he uses rhetorical flourishes and various emotional cues to get people to support him. Parts of the Howard Bloom books I’ve been reading tout the idea that everything is social and that creatures, humans in particular, live and die by whether they and their ideas are accepted by those around them. So we have a strong motivation to go along with the crowd and gain their approval. (I talked a bit about this in my recent article “Are You A Hive Mind?”)

The NY Times has a new op-ed piece called “How Facebook Warps Our Worlds.” It’s pretty familiar stuff: the web, and Facebook in particular, reinforce our ideas and shield us from contrary notions. (I’m not sure it’s entirely true since I see some arguing on Facebook, but I think the idea holds up.) I can definitely see a lot of pressure to think a certain way emanating from one’s social network, pressure that might be subtle enough to not be consciously detected. And that falls right into Adams’ and Bloom’s argument: we can be easily swayed to go along with the crowd. To really fight this you have to examine almost all of your assumptions, and who’s got the time for that?

As the article notes:

THOSE who’ve been raising alarms about Facebook are right: Almost every minute that we spend on our smartphones and tablets and laptops, thumbing through favorite websites and scrolling through personalized feeds, we’re pointed toward foregone conclusions. We’re pressured to conform.

But unseen puppet masters on Mark Zuckerberg’s payroll aren’t to blame. We’re the real culprits. When it comes to elevating one perspective above all others and herding people into culturally and ideologically inflexible tribes, nothing that Facebook does to us comes close to what we do to ourselves.

I’m talking about how we use social media in particular and the Internet in general — and how we let them use us. They’re not so much agents as accomplices, new tools for ancient impulses, part of “a long sequence of technological innovations that enable us to do what we want,” noted the social psychologist Jonathan Haidt, who wrote the 2012 best seller “The Righteous Mind,” when we spoke last week.

“And one of the things we want is to spend more time with people who think like us and less with people who are different,” Haidt added. “The Facebook effect isn’t trivial. But it’s catalyzing or amplifying a tendency that was already there.”

Thinking about trade agreements

So I was walking along this morning and thinking a bit about an issue you see mentioned during these campaigns: trade. Specifically trade between countries that is regulated by various trade agreements like NAFTA.

Both Sanders and Trump, it seems, are against such agreements and think these agreements have screwed America. Clinton is a bit more complex—I think she was for them before she was against them. (That’s probably a cheap shot but I couldn’t resist.) The other Republicans are probably for the agreements though I don’t know for sure.

So what is the right answer? The Sanders/Trump complaint is something like this, I believe, and I’ll use US/Chinese trade relations as an example: The US is letting Chinese products in with few tariffs added on top. The Chinese can keep their prices low because their workers work for little and there are few environmental protections of the sort that American factories have to comply with. So Chinese products are cheaper and can compete with US products in local and international markets.

The Sanders/Trump solution is, I believe, to renegotiate these trade agreements to impose tariffs on these Chinese products. (Sanders would probably lower tariffs if the Chinese added environmental protections, which would, of course, cost money.) As a result, Chinese products would become more expensive and US products would be better able to compete.

So would the Chinese go along with that? It seems they have a couple of options. One is to kowtow to the new agreement and keep access to the US market. Another is to say, “screw you guys,” and focus on other markets like India, Africa, their own, etc. Another option would be to impose tariffs on our products, thus limiting US manufacturers’ appeal in the huge Chinese market. (The problem there is that while there are many Chinese, a lot of them are poor and thus not able to buy much.)

What would China do? I have no idea. Thinking about this stuff really makes me realize how little I know about it.

But here’s my main thought. This kind of analysis is not something you really see in news coverage of the candidates, or in speeches from the candidates and their proxies. Most of the arguments seem to be that this candidate is better than another for some other reason – he or she is more “qualified,” or has leadership qualities, or whatever.

So why is that? I suspect that for some kind of evolutionary reason people are wired towards cult leaders, not dudes who sit around explaining why their approach to trade agreements is better (YAWN!). If anything, we choose the guy we like, then talk ourselves into his ideas on trade agreements. And this would seem to be another example where our behavior is not particularly rational at all.

P.S. As a side note to all this, I reaffirm my belief that the threat posed to American workers by fluid trade will be dwarfed by that of robotics, artificial intelligence and 3D printing.