Archive for the 'Psychology' Category

It’s Alive! Alive!

Lately I’ve been exploring this idea that we don’t know what consciousness is. I considered the possibility that consciousness could be some kind of “force.” My theory was that when this force travels through a complex network, like our human brain, it/we/something experiences what we call subjective consciousness.

I also asked: could this force simply be electricity (or the electromagnetic force)? It seems all too simple and rather Frankenstein-ian. I’ve done a bit of reading and the consensus seems to be “no,” though I need to read more.

One of the articles I read had some juicy tidbits on past experiments of applying electricity to the dead.

WIRED: What Happens If You Apply Electricity to the Brain of a Corpse?

In 1802, Aldini zapped the brain of a decapitated criminal by placing a metal wire into each ear and then flicking the switch on the attached rudimentary battery. “I initially observed strong contractions in all the muscles of the face, which were contorted so irregularly that they imitated the most hideous grimaces,” he wrote in his notes. “The action of the eyelids was particularly marked, though less striking in the human head than in that of the ox.”

In 1803, he performed a sensational public demonstration at the Royal College of Surgeons, London, using the dead body of Thomas Forster, a murderer recently executed by hanging at Newgate. Aldini inserted conducting rods into the deceased man’s mouth, ear, and anus.
One member of the large audience later observed: “On the first application of the process to the face, the jaw of the deceased criminal began to quiver, the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process, the right hand was raised and clenched, and the legs and thighs were set in motion. It appeared to the uninformed part of the bystanders as if the wretched man was on the eve of being restored to life.”

Are hippies right? Is energy consciousness?

I have been talking here, of late, about how computers and brains process information and know things. And the gist of my observations is that only conscious beings (e.g. humans and other living creatures) can “know” anything, or derive meaning from the world. Computers can process information, in a sense, but they don’t know the results of their information processing any more than an abacus knows the results of an addition it just performed.

Some people do say computers could one day become conscious. And I’m open to the possibility; in fact, it ties in with what I’m about to say.

I’ve been operating in a “reasonable” mode for these discussions. Now I’m about to get crazy. I’m the first to admit that everything from this point on is entirely speculative.

So, as mentioned above, I arrived at this conclusion that you need consciousness to “know” things. At that point I need to define what I mean by consciousness. It’s a surprisingly difficult term even though we all experience it all the time. Basically I mean our sense of the reality around us, our internal thoughts, our awareness, the usual stuff.

When you think about it, there’s no reason that information processing devices, like our brains, need consciousness. They (perhaps) are just inputs (our senses) and outputs (our actions/observations), just like a computer, which we presume not to be conscious. So why do we have consciousness? Why do we experience a state of being? This question is what the philosopher David Chalmers refers to as “the hard problem of consciousness.”

What if consciousness is a force, sort of like gravity? It “flows” everywhere. And when it flows through a network like a brain—a complex, self-referential, feed-backing network where “wires” (e.g. neurons) often loop back and affect their own inputs—it results in our sense of self and our awareness.

Now this is certainly not my idea. It’s the crux of many religions, Buddhism, Eckhart Tolle-ism, panpsychism, and even the notion of “the force” from Star Wars. I’m simply saying here that this idea could make sense. I don’t see any immediate objection. And, I will say again, this is speculative.

Of course, saying something is a force is a bit of a cop out. When we say gravity is a force we are basically saying that we don’t know what it is. (The same with other forces like the electromagnetic force, or the strong and weak forces of quantum theory.) It’s just a “thing” that happens in a semi-predictable way. Why it happens, or why it works, is beyond us (though people have theories.)

I may be totally exposing my naïveté here, but I wonder if this force of consciousness is electricity*, since that is what powers the neurons of the brain. Is consciousness electricity going through a complex, feed-backing network? If it is, then the idea of conscious computers doesn’t seem that crazy (since computers are also powered by electricity, though their architecture is obviously not biological.)

* Technically, this would be the electromagnetic force.

If I’m right, living people are sort of like a computer with the power on. Our brains have an architecture which is the arrangement of our neurons (the connectome.) When that architecture has “juice” running through it, you have a living, talking person. When that juice is taken away, you have—you got it—a dead person (similar to a computer with the power off.)

The point that I think a lot of spiritual teacher types (like Eckhart Tolle) argue is that “you” are not your architecture, you are the force flowing through the architecture. And I, a self-described atheist, am conceding that there may be something to this. From this view, becoming “enlightened” is merely the conscious force flowing through one entity becoming aware of itself.

To tackle an obvious question: does this mean we all live forever? Well, not in the sense that you might like. I think your memories, beliefs, thoughts, everything that makes up “you” are held in your brain structure (e.g. connectome). When that goes, you go. But if you are not really that stuff but are rather the force that flows through that network then it could be said we continue in some way.

Anyway, this needs more thought and I realize I’m just rediscovering the wheel here. Others have said these exact thoughts (aside from some of the neuroscience stuff) for eons.

And none of this really explains what consciousness is.

For further reading: Quora answer to “Is consciousness a form of energy?”

It’s interesting partly for the diversity of opinion and the observation that different people are using the phrase consciousness to mean different things. I’ll note one answer talks about Integrated Information Theory, which is the notion that consciousness arises out of complex connections (like those in the network of the human brain.) This is similar to what I describe above (and probably where I got the idea from.)

So what is information anyway?

With the advent of artificial intelligence (AI) there’s a lot of talk about computers knowing things, or processing information. But how does this actually work?

I’ll be upfront here and say, “I don’t know,” at least in any detailed sense. But thinking out loud on the topic might turn up some interesting observations.

Computers have been processing information for ages (and before computers, calculators, abacuses, etc. were doing it.) With AI, computers are simply processing information better, faster and “deeper” than ever before.

But what is really going on when we say a computer processes “information”? What information?

Let’s first consider the notion of a “bit.” The term comes from the relatively recent discipline of information theory and refers to the smallest unit of information possible. In essence, it’s a yes or no question. For example, let’s say I was tracking information about the couches in my couch factory. These couches come in three colors—red, green and orange. So I could track that information in three bits: a bit that gets marked “yes” if the couch is red, a bit that gets marked “yes” if the couch is green and a bit that gets marked “yes” if the couch is orange. Actually I could get away with using only two bits by saying, “if the red bit is set to no and the green bit is set to no then the couch must be orange.”
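To make the couch example concrete, here’s a minimal sketch of the two-bit scheme (the couch factory and its colors are of course made up):

```python
# Encode the three couch colors in two bits: one "is it red?" bit
# and one "is it green?" bit. If both are "no" (0), the couch must
# be orange -- so three colors fit in two bits, not three.
def encode(color):
    return {"red": (1, 0), "green": (0, 1), "orange": (0, 0)}[color]

def decode(bits):
    red_bit, green_bit = bits
    if red_bit:
        return "red"
    if green_bit:
        return "green"
    return "orange"

# Every color survives the round trip through its two-bit encoding.
for color in ("red", "green", "orange"):
    assert decode(encode(color)) == color
```

The savings come from treating “both bits off” as its own answer, which is the same trick that lets n bits distinguish 2ⁿ different things.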

When you look out at the world, you can basically describe it using bits. Look at your best friend. Are they male, yes or no? Do they have a mustache, yes or no? Do they read this blog, yes or no? Are they gay, yes or no? And on and on…

You can see how this can be a remarkably effective tool, and this tracking of bits is what drives computing. For example, images can be “held” in a computer if you track the red, green and blue values (each represented as a number which can be captured as a series of bits*) for each pixel. (Some image formats store extra channels too, such as transparency.)

* More detailed explanation here, if you care.
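As a rough sketch of the idea (the pixel values here are invented for illustration), a single pixel’s color is commonly stored as three 8-bit numbers, i.e. twenty-four yes/no bits:

```python
# A pixel as red, green and blue values, each ranging 0-255,
# which is exactly what 8 bits per channel can represent.
def pixel_to_bits(r, g, b):
    # Format each channel as an 8-digit binary string and join them.
    return f"{r:08b}{g:08b}{b:08b}"

bits = pixel_to_bits(255, 128, 0)  # an orange-ish pixel
print(bits)       # twenty-four 1s and 0s
print(len(bits))  # 24
```

Multiply those 24 bits by the millions of pixels in a photo and you get a sense of how “an image” reduces to a very long string of yes/no answers.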

But it’s key at this point to take a step back and realize that just because computers hold information about couches, best friends or images, that doesn’t mean they really know anything. They know nothing, because they are basically dumb electrical signals shuffling around. A computer knows the image it contains no more than an abacus knows the number value it just helped add. Both tools require a human being to come along and observe the information being represented. Without the human, a computer’s information is a bunch of yeses or nos, devoid of context or purpose.

I’m pretty sure some information theorists would disagree with some of what I’ve said here, but this is how I see it.

So that makes us feel pretty special as humans, right? We know stuff whereas these dumb computers just sit there twiddling their switches. But do we really know anything?

Like computers, we also seem to hold information in bits of a sort. We have neurons and they fire or they don’t*. (I believe I’m correct in saying neurons can actually impart more than just yes or no values because they can fire at different strengths. To be honest, I’ve never really been clear about that but for the purposes of this post we merely need to agree that neurons hold information in some way.) So, you observe a coffee cup and various neurons that activate for round shapes start firing, as do neurons that activate for the smell of coffee, past memories of coffee, the general sense of being amped up and awake and on and on. Our brain “represents” the coffee cup using a lot of bits… I dunno how many. And we are aware of this represented information with different degrees of awareness. I might be strongly conscious of the notion: that is a coffee cup, but I’m less aware of the sense that coffee tastes bitter, or that it has caffeine.

*I’m aware that information in brains is really held in the connections between neurons (synapses), but I think this explanation works for our purposes.

My point here, and I do have one, is this: with computers, we track information about objects (or concepts or whatever) but we understand that that information is meaningless until a conscious agent, probably a human, comes along and observes it. But brains also track bits* of information. So who/what is the conscious agent that is required to observe that information in our brains and “convert” it from meaningless bits to useful information? This could be another way of asking, “What is consciousness?”

While thinking about this I stumbled across this interesting Quora question with fascinating (though not conclusive) answers: How much information does a human brain neuron store?

Can we figure out what information is by looking at how information is held in brains, computers and bee hives?

I’ve been perplexed for a long time by what information really is. We often hear that the brain holds information. How is this so? The gist, as I understand it, is that information comes into our brains via the senses and these “bits” of sensory information are held in the activation of neurons. So, as a child I might have petted a cat and that sensory experience was encoded in my brain. Later, a cat scratched me and that experience was also encoded. Numerous similar experiences occurred (as well as more formal book learning about cats) and now when I think of cats, all these encoded sensory experiences activate to various degrees and as a result I have information about what a cat is.

Part of the gist there is that I need consciousness to experience sensory input and therefore consciousness is (probably) necessary for information to exist (if “exist” is even the right term.)

Now think of computers. They too hold information (or so we are told.) Information in computers is held in on/off switches, which are transistors with electrons running through them. But where is the consciousness? I think the general consensus is “nowhere.” The information in a computer is meaningless until a conscious mind observes it. (Bringing to mind the old “tree/forest” koan.)

And yet, people talk about artificial intelligence gaining consciousness.

Both brains and computers are networks in the sense that they are interconnected nodes. The basic node of the brain is the neuron and the basic node of the computer is the transistor (though it could be anything that can be in an on/off state.) But, again, computers don’t really “know” the information they hold (because they are not conscious.)

Bee hives and ant colonies are also networks and the basic nodes are the individual bees or ants. And hives and colonies seem to “know” information that the individual nodes can’t know (like how to work together to build a bee hive). They can even perform calculations.

So where is this information held? Is there a meta consciousness that must exist to appreciate the information held in the connecting nodes (ants and bees)? Or are they more like computers? Dumb nodes with no self awareness? (I’m aware ants and bees probably have some kind of consciousness but not necessarily the amount needed to appreciate the information they collectively hold.)

(Part of the answer to these questions may be found here:
The Remarkable Self-Organization of Ants)

I find myself wondering if information even exists at all.

Narratology

Lately, I’ve become interested in the concept of narratology. Wikipedia conveniently defines it.

Narratology refers to both the theory and the study of narrative and narrative structure and the ways that these affect our perception.

As I see it, the theory of narratology lists the components of stories (themes, characters, archetypes, etc.) and also describes how stories guide or distort our perception of reality.

It’s the second part that interests me most. It’s the idea that we see the world around us and try to fit it into a narrative—a story to make sense of it all.

This certainly relates to politics, and you see it now in the Trump era. Some people look at Trump and see a defender of the little guy who will disrupt the corrupt powers that be. Others see a rising fascist who may destroy democracy. Obviously both groups have access to the same information, the same surrounding reality. How can they come to such disparate conclusions? (Additionally, both sides are manufacturing facts to support their narratives.)

This is where narratology comes in. I believe we have a story in our heads and we force what we see to fit into that narrative.

What do all good narratives need? A good guy and a bad guy. Someone to root for and someone to hate. The different groups have forced the emergence of Trump into their narrative.

(You might be asking me: what do you think of Trump? Check out my latest acid logic article for the answer. In general, I’m wary about him but doubt he’s the end of civilization.)

On a side note, I think narratology is related to health. I’m reminded of a story a friend of mine told me about his grandfather. The man walked into the ER one day, convinced something was wrong with him. He demanded the doctors check him out and they did, wearily reporting that everything was fine. The grandfather insisted it wasn’t and died that night. (I realize this anecdotal story doesn’t really prove my point, but it’s all that comes to mind right now.)

So where do these narratives—these story templates with which we generate our interpretation of reality—come from? Maybe they are, on some level, embedded in our biology. I’m pretty unclear on how this could be possible but Jung, among others, believed it. (I think he did; I’m not an expert.)

Or maybe narratives evolve and are passed culturally through Richard Dawkins’ “memes.”

For the most part, I’m wary of narratives. I think they blind us to the true nature of reality, causing us to make heroes and villains out of what are basically flawed if perhaps unusual and exceptional people. For the most part, I think our narratives fail us. (You can see this especially in numerous conspiracy theories that arise and are easily debunked yet still earn followers.)

Shouldn’t we be upset about Russian hacking regardless of their reason?

So what do we know? We know that during the presidential campaign numerous emails were hacked from computers belonging to the Clinton campaign and the DNC. These emails were handed to the Wikileaks organization, which made them public. The effect was that the Clinton campaign was embarrassed for various reasons.

So who did the hacking? Many U.S. intelligence agencies say it was the Russians. That certainly seems like the likely answer.

At the time of the hacking it was a bit unclear why the emails were hacked and made public. A reason often mentioned was that the Russians wanted to undermine the democratic process. Now U.S. Intelligence is saying the reason was to actually help the Trump campaign.

Frankly, is one reason any worse than the other? If Russians are attacking our democracy, shouldn’t we be pissed off regardless of their motivation? Let’s say it turned out they did it as a big prank. Is that a better reason than trying to get Trump in office?

So why are people obviously much more upset with the second reason? I think it’s partly because we are wired to be opposed to unfairness. If the Russians were just hacking to undermine democracy, that doesn’t really favor one group over the other. But if they are playing favorites, that galls (some of) us.

So why does it gall us? What’s the psychological reason? I think it raises the possibility that you can work your ass off and still lose for reasons outside of your control. The landscape is against you.

And that’s a perfectly good reason. But in the big picture, I feel we should be upset about foreign meddling for any reason.

I’m not convinced, however, that were the shoe on the other foot (say the Russians hacked Trump’s servers and released pictures of him having sex with goats) many now outraged wouldn’t be thanking the Russians.

How the Electoral College affects the psychology of voting

Recently, an online petition circulated demanding that the electoral voters voting for Trump change their vote to Clinton. One argument made was that the electoral voters should do this because Clinton won the popular vote by handy margins.

The counter argument to this was that Trump could say, “Look, I pursued a specific campaign strategy to win this election and that strategy was to win the electoral college. If you now say I needed to win the popular vote, you are changing the rules after we all played the game.” And it would be a fair point.

Now there’s currently a bit of rumbling from some Democrats that the country should get rid of the electoral college. And they’ve got a legitimate grievance. Twice in 16 years a Democrat has won the popular vote but lost the electoral college. It would seem that getting rid of the college would benefit Democrats, no?

But it may not be that simple. We realize that all of a state’s electoral votes go to whoever wins the most votes in that state. (There are rare exceptions to this.) And certain states reliably swing one way. My state of California is a good example; it always swings towards Democratic candidates. As a result, Republican voters in this state are disincentivized to vote—why vote for your guy when you know he or she has no chance of getting your state’s electoral votes?

However, if we switched to a popular vote, that disincentive disappears. Suddenly a lot of people who might not be that eager to vote have a reason to do so. Suddenly their vote does count. And suddenly political parties have a lot more reason to actively pursue those votes. (Right now, I suspect a lot of Republicans don’t even bother with California.)

Now, does this mean the popular vote would swing more Republican? Obviously there are plenty of Red states where, under a popular vote system, Democratic voters might be more incentivized to vote. The only way to really figure this out would be to examine the populations and voting tendencies of each state and take some educated guesses. I did look up the voting tendencies of the current US population and it’s about an even split between Dems and Repubs. (There are more registered Democrats, but independents tend to slightly swing red which evens it out.) So it’s hard to really predict what the results of a popular vote would be.

The larger point here is that you can’t make predictions about one system based on the results from another. Or, more boldly, don’t fuck with shit unless you really know what you’re doing.

Now, of course, there’s a reasonable, non-partisan argument that we should just switch to the popular vote system because it is more democratic.

Is Kanye West a fan of Scott Adams?

I’m currently working on an acid logic piece taking a look at Scott Adams’ predictions about Trump and seeing how they stand up to the election results. Obviously Adams was right on his main prediction that Trump would become President.

I’m still not quite sure what to make of Adams’ arguments. One comment he makes often is that reality doesn’t exist. Physical reality like molecules might exist (though Adams is at times dubious of that; very Buddhist of him) but social reality and political reality are not real. Trump, I think Adams would say, created a new political reality via masterful powers of persuasion. People who clung to the old reality, who did not see how Trump was changing the rules of the game (I might fall into this category) got played.

I just stumbled across an article about rapper Kanye West going on a long political rant onstage in Sacramento. He made comments that sound Adams-esque.

During his rant West said: “If your old a– keeps following old models, your a– is going to get Hillary Clintoned. You might not like it, but you need to hear it.”

Frankly, that would be a great motto for the Democratic Party going forward.

West, it should be noted, states he is going to run for President in 2020. Before the Trump victory I would have presumed the likelihood of this actually happening to be low. Now I’m not so sure.

Are emotions the necessary antidote to reason?

I’ve been reading a rather dense, philosophical book called “Freedom Evolves” by Daniel Dennett. I’m not sure how much I’m getting out of it but it does have one interesting nugget worth reporting on.

To understand this nugget I have to first describe our general view of reason and emotion. This view is that reason is sort of the antidote to emotion. Men’s emotions run wild and a stern application of reason is necessary to “talk them off the ledge.” (You see this point brought up often during this contentious election cycle.)

The view intimated in the Dennett book is that, in fact, emotion is a cure for reason. Evolution “created” emotion to ward off the potential dangers of reason.

What do I mean? Well let’s say you were captured by some fiend and he asked you to make a choice. He was going to kill one person and that person could be your son or some guy in China whom you’d never met. A person operating on pure reason would have trouble with this decision. He or she might factor in the relative ages of these two people, deciding who still had the most life to live. He or she might try to take a guess at what productive things each potential target might do in their lives in order to ascertain who was the most valuable person.

An emotional person (e.g. the rest of us) would say “kill the Chinese guy.” We might be torn about it, but I think that’s the decision most of us would ultimately make because we would have a strong emotional connection to our child and very little emotional connection to a stranger. (This brings to mind Peter Singer’s “Drowning Child” thought experiment.)

This idea—that emotion helps us make decisions—ties in with the work of Antonio Damasio. In his book, “Descartes’ Error,” he described people who, due to some pathology, had lost their ability to really feel emotions. As a result, their decision-making abilities went in the toilet. I think Damasio described a fellow who was fired from his job because he couldn’t prioritize tasks. The boss would tell the guy to finish a report and he would miss the deadline because he spent 8 hours arranging staplers. He could not prioritize because every option had equal emotional “weight” (which is to say, none.)

This also ties in with some fears I’ve seen expressed about artificial intelligence. The concern is that A.I. might be programmed to do some task like construct a new kind of material and then decide that human bones are the best source for this new material and therefore the A.I. would instigate massive genocide to farm for human bones. It would do this because it would be operating using only logic and no emotional weighting. (I’m using a vastly simplified example of this fear, but you get the drift.)

What separates man from machine?

I just finished an article on computers writing music which got me thinking about computers thinking. (Thinking being a big part of music writing.) When humans compose music, or write stories, or do any artistic pursuit, we are consciously processing our decisions, deciding to do this or that or try this or that idea. If computers can start to replicate these processes, they would be doing them unconsciously. (Unless we want to consider, as some have, that computers are conscious in some weird way, but that’s a debate for another time. For now I will presume they are not conscious.)

So let’s think about this. Let’s say I’m writing a story about a character named Bob who drives his car a lot. In my mind, Bob is a person and his car is an object, and I shuffle them through various scenarios that create tension in fiction, etc. How would a computer approach writing a story about Bob and his car? (Computers writing fiction is not that far off.)

Well, computers would never really be aware of Bob and his car. Ultimately a computer is simply turning the states of millions of transistors from on to off or vice versa. Bob would essentially just be several bytes worth of data, data simply being a collection of transistors in various states. All words—nouns, verbs, adjectives, etc.—are simply data captured by the state of transistors. The point being the computer never really knows the meaning of the words. At best it “knows” the flow of electricity (and even that statement is a stretch.)
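A tiny sketch of what I mean by “several bytes worth of data”: to the machine, the name “Bob” is nothing but three bytes, i.e. twenty-four on/off states.

```python
# The word "Bob" as the computer "sees" it: three bytes (one per
# ASCII character), which are just twenty-four on/off states with
# no meaning attached to them.
word = "Bob"
as_bytes = word.encode("ascii")  # the bytes 66, 111, 98
as_bits = " ".join(f"{byte:08b}" for byte in as_bytes)
print(list(as_bytes))  # [66, 111, 98]
print(as_bits)         # 01000010 01101111 01100010
```

Nothing in those 1s and 0s says “a guy who drives his car a lot”—that association only exists in the head of a human reading the output.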

Essentially, computer programs map symbols (letters, music notes, patches of color, etc.) onto these transistors. And then they manipulate these mapped symbols to do various things, one of which is to create art. The symbols only have meaning to the audience, which is us humans. In a sense, a novel written by a computer could be said to not exist until a conscious human reads it.

So what makes us different from computers? We are conscious, obviously, but also these symbols have actual meaning to us. The word “Bob” can have an actual meaning, referring to a particular guy, fictional or not, who has various behavioral tendencies, characteristics, a certain appearance, etc. To us, Bob (the word) can represent a real person. We can map symbols to ideas/concepts/entities.

And yet, our brains work in a way pretty similar to computers. Our neurons are powered by electricity and we, in some weird way, hold information in our synapses. So why do we humans experience meaning when computers don’t?

I dunno…