Archive for the 'Technology' Category
April 4th, 2017 by Wil
I’ve often talked here about why I think certain technological developments, namely AI, robotics and 3D printing, could radically alter the landscape of employment. I am, of course, hardly the first or only person to discuss this.
This Salon article is a worthy addition to the debate. The article posits that personal manufacturing robots and 3D printers could allow people to become a factory of one. Have you always wanted to produce and sell a line of rubber figurines in the form of the Loch Ness Monster? With your own personal manufacturing robot you could do so from your basement.
The article states:
This is already beginning to happen. In 2014, there were more than 350,000 manufacturing companies with only one employee, up 17 percent from 2004. These companies combine globalization and automation, embracing outsourcing and technological tools to make craft foods, artisanal goods and even high-tech engineered products.
Many American entrepreneurs use digitally equipped manufacturing equipment like 3D printers, laser cutters and computer-controlled CNC mills, combined with market places to outsource small manufacturing jobs like mfg.com to run small businesses. I’m one of them, manufacturing custom robotic grippers from my basement. Automation enables these sole proprietors to create and innovate in small batches, without large costs.
An interesting idea. Nonetheless, it feels somewhat utopian, doesn’t it? Are we really going to counterbalance the rise in unemployment caused by robots and 3D printers by turning households into small manufacturing units? This might work for a small subset of people, but it seems unlikely to be a salve for the larger problem.
A commenter on the post makes a funny and similar point:
Some good points, but this techno-hipster bullcrap about the future being dufus hipster makers with at home 3D printers and trained on LEGO Mindstorms making artisanal pickle jar openers being the future only serves those who are selling the hipster shovels.
February 3rd, 2017 by Wil
I thought I would get a little more informed about the state of life extension technology and dug up this Guardian article on various drugs and techniques being investigated. This section stands out.
One of the more unusual approaches being tested is using blood from the young to reinvigorate the old. The idea was borne out in experiments which showed blood plasma from young mice restored mental capabilities of old mice. A human trial under way is testing whether Alzheimer’s patients who receive blood transfusions from young people experience a similar effect. Tony Wyss-Coray, a researcher at Stanford leading the work, says that if it works he hopes to isolate factors in the blood that drive the effect and then try to make a drug that does a similar thing. (Since publishing his work in mice, many “healthy, very rich people” have contacted Wyss-Coray wondering if it might help them live longer.)
As I age, I often find myself looking at the supple bodies of young people and musing on how I would like to drink their blood. It’s good to see I’m not alone.
The last sentence in that quoted paragraph touches on what got me thinking about this. In some sense, death is the great equalizer. Are we about to enter an era where income inequality will correspond with lifespan inequality? Technically, I think we are already there though the disparity is minimal.
It seems a near certainty that in the future the aged wealthy will be kidnapping young people off the streets and harvesting their blood.
January 31st, 2017 by Wil
Elon Musk has been talking about the concept of a neural lace. This is basically a wire mesh inserted into the brain which enables direct communication with populations of neurons. To quote this article…
…the neural lace is a device that is intended to grow with your brain. Its primary purpose is to optimize mental output through a brain-computer interface, allowing the human brain to effortlessly access the internet and, thus, keep up with (and someday merge with) artificially intelligent systems.
Michael Chorost described something similar in his book “World Wide Mind” which I discussed here.
Today I was musing on the following scenario. Could we insert neural laces into the brains of dogs and then connect those canine brains to various A.I. brain augmentation devices such that dogs would then become smart enough to communicate with us? Are talking dogs the first sign of the singularity?
Bark once if you agree!
January 24th, 2017 by Wil
So recently I’ve been interested in the topic of what a conscious, living creature really is. To quote myself…
If I’m right, living people are sort of like a computer with the power on. Our brains have an architecture which is the arrangement of our neurons (the connectome.) When that architecture has “juice” running through it, you have a living, talking person. When that juice is taken away, you have—you got it—a dead person (similar to a computer with the power off.)
Today I stumbled onto this…
In a paper published in PLOS ONE in early December, scientists detailed how they were able to elicit a pattern similar to the living condition of the brain when exposing dead brain tissue to chemical and electrical probes. Authors Nicolas Rouleau, Nirosha J. Murugan, Lucas W. E. Tessaro, Justin N. Costa, and Michael A. Persinger (the same Persinger of the God-Helmet studies) wrote about this breakthrough,
This was inferred by a reliable modulation of frequency-dependent microvolt fluctuations. These weak microvolt fluctuations were enhanced by receptor-specific agonists and their precursors[…] Together, these results suggest that portions of the post-mortem human brain may retain latent capacities to respond with potential life-like and virtual properties.
That’s just a fancy way of saying it might be possible to bring dead brain tissue back to life, sort of.
This is a far cry from reactivating a dead person, and nothing here really implies that would ever be possible. But it does play into the theory I posted above. You could say they turned the juice back on.
January 13th, 2017 by Wil
Lately I’ve been exploring this idea that we don’t know what consciousness is. I considered the possibility that consciousness could be some kind of “force.” My theory was that when this force travels through a complex network, like our human brain, it/we/something experiences what we call subjective consciousness.
I also asked: could this force simply be electricity (or the electromagnetic force?) It seems all too simple and rather Frankenstein-ian. I’ve done a bit of reading and the consensus seems to be “no” though I need to read more.
One of the articles I read had some juicy tidbits on past experiments of applying electricity to the dead.
WIRED: What Happens If You Apply Electricity to the Brain of a Corpse?
In 1802, Aldini zapped the brain of a decapitated criminal by placing a metal wire into each ear and then flicking the switch on the attached rudimentary battery. “I initially observed strong contractions in all the muscles of the face, which were contorted so irregularly that they imitated the most hideous grimaces,” he wrote in his notes. “The action of the eyelids was particularly marked, though less striking in the human head than in that of the ox.”
In 1803, he performed a sensational public demonstration at the Royal College of Surgeons, London, using the dead body of Thomas Forster, a murderer recently executed by hanging at Newgate. Aldini inserted conducting rods into the deceased man’s mouth, ear, and anus.
One member of the large audience later observed: “On the first application of the process to the face, the jaw of the deceased criminal began to quiver, the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process, the right hand was raised and clenched, and the legs and thighs were set in motion. It appeared to the uninformed part of the bystanders as if the wretched man was on the eve of being restored to life.”
January 3rd, 2017 by Wil
With the advent of artificial intelligence (AI) there’s a lot of talk about computers knowing things, or processing information. But how does this actually work?
I’ll be upfront here and say, “I don’t know,” at least in any detailed sense. But thinking out loud on the topic might turn up some interesting observations.
Computers have been processing information for ages (and before computers, calculators, abacuses, etc. were doing it.) With AI, computers are simply processing information better, faster and “deeper” than ever before.
But what is really going on when we say a computer processes “information”? What information?
Let’s first consider the notion of a “bit.” The term comes from the relatively recent discipline of information theory and refers to the smallest unit of information possible. In essence, it’s a yes or no question. For example, let’s say I was tracking information about the couches in my couch factory. These couches come in three colors—red, green and orange. So I could track that information in three bits: a bit that gets marked “yes” if the couch is red, a bit that gets marked “yes” if the couch is green and a bit that gets marked “yes” if the couch is orange. Actually I could get away with using only two bits by saying, “if the red bit is set to no and the green bit is set to no then the couch must be orange.”
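The two-bit couch scheme above can be sketched in a few lines of Python. (The particular bit layout, red in the high bit and green in the low bit, is an arbitrary choice for illustration.)

```python
# Two bits encode three couch colors: a "red" bit and a "green" bit.
# If neither bit is set, the couch must be orange.
COLORS = {"red": 0b10, "green": 0b01, "orange": 0b00}

def decode(bits):
    """Recover the couch color from its two-bit code."""
    red_bit = (bits >> 1) & 1
    green_bit = bits & 1
    if red_bit:
        return "red"
    if green_bit:
        return "green"
    return "orange"

# Round-trip check: every color decodes back to itself.
for color, bits in COLORS.items():
    assert decode(bits) == color
```

Two bits suffice here because two yes/no questions can distinguish up to four states, and we only have three colors to tell apart.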
When you look out at the world, you can basically describe it using bits. Look at your best friend. Are they male, yes or no? Do they have a mustache, yes or no? Do they read this blog, yes or no? Are they gay, yes or no? And on and on…
You can see how this can be a remarkably effective tool, and this tracking of bits is what drives computing. For example, images can be “held” in a computer if you track the red, green and blue value (represented as a number which can be captured as a series of bits*) for each pixel, sometimes along with an alpha value for transparency.
* More detailed explanation here, if you care.
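As a toy illustration of the pixel idea, here is a single pixel packed into 24 bits, 8 bits each for red, green and blue. (Real image formats are more involved, but this is the gist.)

```python
# Pack one pixel's red, green, blue values (each 0-255) into 24 bits.
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b

# Unpack the 24 bits back into the three color values.
def unpack_rgb(pixel):
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

orange = pack_rgb(255, 165, 0)
assert orange == 0xFFA500            # the familiar hex color notation
assert unpack_rgb(orange) == (255, 165, 0)
```

A full image is then just this, repeated once per pixel: a megapixel photo is roughly three million of these yes/no-style values.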
But it’s key at this point to take a step back and realize that just because computers hold information about couches, best friends or images, that doesn’t mean they really know anything. They know nothing, because they are basically dumb electrical signals shuffling around. A computer knows the image it contains no more than an abacus knows the number value it just helped add. Both tools require a human being to come along and observe the information being represented. Without the human, a computer’s information is a bunch of yeses or nos, devoid of context or purpose.
I’m pretty sure some information theorists would disagree with some of what I’ve said here, but this is how I see it.
So that makes us feel pretty special as humans, right? We know stuff whereas these dumb computers just sit there twiddling their switches. But do we really know anything?
Like computers, we also seem to hold information in bits of a sort. We have neurons and they fire or they don’t*. (Strictly speaking, a neuron’s firing is all-or-nothing, but neurons can vary their rate of firing, so they can convey more than a simple yes or no value. For the purposes of this post we merely need to agree that neurons hold information in some way.) So, you observe a coffee cup and various neurons that activate for round shapes start firing, as do neurons that activate for the smell of coffee, past memories of coffee, the general sense of being amped up and awake and on and on. Our brain “represents” the coffee cup using a lot of bits… I dunno how many. And we are aware of this represented information with different degrees of awareness. I might be strongly conscious of the notion: that is a coffee cup, but I’m less aware of the sense that coffee tastes bitter, or that it has caffeine.
*I’m aware that information in brains is really held in the connections between neurons (synapses), but I think this explanation works for our purposes.
My point here, and I do have one, is this: with computers, we track information about objects (or concepts or whatever) but we understand that that information is meaningless until a conscious agent, probably a human, comes along and observes it. But brains also track bits of information. So who/what is the conscious agent that is required to observe that information in our brains and “convert” it from meaningless bits to useful information? This could be another way of asking, “What is consciousness?”
While thinking about this I stumbled across this interesting Quora question with fascinating answers (though no conclusive ones): How much information does a human brain neuron store?
December 16th, 2016 by Wil
I’ve been reading a fascinating book entitled “Rise of the Robots.” It is, as you might suspect, all about celery gardening.
I jest, of course. It is about robots and how the automation of physical and mental work threatens our economy.
There’s an interesting little premise that barely gets a mention in the book (so far) but seems worth describing here. We all understand that a lot of physical labor in product manufacturing has been replaced by robots. This has been easy because manufacturing involves a lot of repetitive tasks, which robots excel at. What is harder for robots to do is diverse physical labor like what a janitor does. A janitor doesn’t repeat the same exact task over and over again. He might clean one bathroom, then mop a floor, then clean some windows, then throw out some boxes, etc. And he might do all this in a variety of buildings with different floor plans. Getting a robot to navigate all these tasks is still difficult. For one thing, while technology for robots to “see” is improving, it’s still not perfect.
But imagine this. You make an ambulatory robot with cameras attached and “grabbing” mechanisms as hands. You give control of this robot, via the internet, to some guy in an Indian call center. The Indian guy, paid a pittance, provides the seeing and motion control for this robot. By running the robot, probably via something like a video game interface, the remote worker does the job.
Could it be that this process of human/robot collaboration could put real, first world janitors out of work (especially with the push to raise the minimum wage)?
But wait. As they say, it gets worse. As this robot is being guided through the janitorial tasks, isn’t it being trained to do these jobs all by itself? For example, let’s say the remote operator guides the robot to wash the windows in a particular building once a week. As he guides the mechanical appendages through the process of window washing, the robot records them. Next week it can do this job all by itself. Really, the human is only needed to train the robot a few times, then the robot takes over. (This is sort of how the robot Baxter, working in factories as we speak, operates.)
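The record-and-replay idea above can be sketched in a few lines. This is a deliberately simplified toy, not how Baxter actually works; the class and method names here are hypothetical, purely for illustration.

```python
# Toy sketch of "record and replay" robot training: while a human
# teleoperates the arm, we log the sequence of positions (waypoints);
# later the robot steps through them unattended.

class RecordReplayArm:
    def __init__(self):
        self.recorded = []              # waypoints logged during human guidance
        self.position = (0.0, 0.0, 0.0)

    def guide_to(self, x, y, z):
        """Human moves the arm; the waypoint is recorded."""
        self.position = (x, y, z)
        self.recorded.append((x, y, z))

    def replay(self):
        """Robot repeats the recorded motion by itself."""
        for waypoint in self.recorded:
            self.position = waypoint     # a real arm would move here
        return self.position

arm = RecordReplayArm()
arm.guide_to(0.1, 0.5, 0.2)   # human guides the arm once...
arm.guide_to(0.4, 0.5, 0.2)
final = arm.replay()           # ...then the robot repeats unattended
```

The unsettling economic point is right there in the structure: the human appears only in the recording phase, and every subsequent run needs no one.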
And don’t get me started on remote controlled robot prostitutes!
November 2nd, 2016 by Wil
I stumbled across an interesting article discussing how scientists are rendering data in musical form. This, apparently, allows them to sense patterns in the data they might otherwise be unaware of.
Scientists can listen to proteins by turning data into music
Transforming data about the structure of proteins into melodies gives scientists a completely new way of analyzing the molecules that could reveal new insights into how they work — by listening to them. A new study published in the journal Heliyon shows how musical sounds can help scientists analyze data using their ears instead of their eyes.
The researchers, from the University of Tampere in Finland, Eastern Washington University in the US and the Francis Crick Institute in the UK, believe their technique could help scientists identify anomalies in proteins more easily.
“We are confident that people will eventually listen to data and draw important information from the experiences,” commented Dr. Jonathan Middleton, a composer and music scholar who is based at Eastern Washington University and in residence at the University of Tampere. “The ears might detect more than the eyes, and if the ears are doing some of the work, then the eyes will be free to look at other things.”
If you don’t fully comprehend what this all means, well, I’m right there with you. But one can easily envision a way that different values of data could be thought of as steps away from an average value, and those steps could be represented as a musical scale. So really large musical leaps would indicate major deviations from an average.
And here’s another article also about data being transformed into music. I guess this is a “thing.”
Detecting patterns in neuronal dendrite spines by translating them into music
There are some examples of this “dendritic spines as sound” music here, and it’s pretty unappealing. (Part of the problem is that it’s rendered with hideous MIDI instrumentation.)
October 18th, 2016 by Wil
I just finished an article on computers writing music which got me thinking about computers thinking. (Thinking being a big part of music writing.) When humans compose music, or write stories, or pursue any art, we consciously process our decisions, choosing to do this or that, trying this or that idea. If computers can start to replicate these processes, they would be doing them unconsciously. (Unless we want to consider, as some have, that computers are conscious in some weird way, but that’s a debate for another time. For now I will presume they are not conscious.)
So let’s think about this. Let’s say I’m writing a story about a character named Bob who drives his car a lot. In my mind, Bob is a person and his car is an object, and I shuffle them through the various scenarios that create tension in fiction, etc. How would a computer approach writing a story about Bob and his car? (Computers writing fiction is not that far off.)
Well, computers would never really be aware of Bob and his car. Ultimately a computer is simply turning the states of millions of transistors from on to off or vice versa. Bob would essentially just be several bytes worth of data, data simply being a collection of transistors in various states. All words—nouns, verbs, adjectives, etc.—are simply data captured by the state of transistors. The point being the computer never really knows the meaning of the words. At best it “knows” the flow of electricity (and even that statement is a stretch.)
Essentially, computer programs map symbols (letters, music notes, patches of color, etc.) onto these transistors. And then they manipulate these mapped symbols to do various things, one of which is to create art. The symbols only have meaning to the audience, which is us humans. In a sense, a novel written by a computer could be said to not exist until a conscious human reads it.
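To make the symbol-mapping concrete, here is what “Bob” looks like from the machine’s side: just numbers, and ultimately just bits. (Using the standard ASCII/Unicode code points.)

```python
# "Bob" as the computer actually stores it: character codes, then bits.
word = "Bob"
codes = [ord(c) for c in word]            # each letter's numeric code point
bits = [format(n, "08b") for n in codes]  # each code as eight binary digits

print(codes)  # [66, 111, 98]
print(bits)   # ['01000010', '01101111', '01100010']
```

Nothing in those 24 bits is “about” a guy who drives a car; the association between the pattern and the person exists only in the reader’s head.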
So what makes us different from computers? We are conscious, obviously, but also these symbols have actual meaning to us. The word “Bob” can have an actual meaning, referring to a particular guy, fictional or not, who has various behavioral tendencies, characteristics, a certain appearance, etc. To us, Bob (the word) can represent a real person. We can map symbols to ideas/concepts/entities.
And yet, our brains work in a way pretty similar to computers. Our neurons are powered by electricity and we, in some weird way, hold information in our synapses. So why do we humans experience meaning when computers don’t?
September 29th, 2016 by Wil
I continue to track Scott Adams’ blog to view his predictions about Trump. But today he tackled a different subject and he made note of something I’ve thought about myself.
Another key part of my prediction is that the Caliphate will start to weaponize hobby-sized drones for attacks all over the world.
Drones, these miniature helicopters that are popping up all over the place, seem the perfect vehicle to lob explosives into crowds of people. I’m not talking about the rather small drones you see at the park, but the kind of drones that can have cameras mounted on them, or the kind Amazon is testing for deliveries. If it can carry a package it can certainly carry a bomb. And if the sky is awash with Amazon drones how will we tell the legal drones from ones delivering death? (There may actually be a way, in fact, I presume there is, but it complicates things.)
I think the bigger picture here is this realization: for every new technological advance from now on, we need to ask: how could this be employed by terrorists? Certainly robots and drones have obvious implications for terrorists. So too do advances in the biological sciences like the designing of viruses in a garage laboratory. And what can people create with 3D printers?
On a side note, recall that the Japanese in WWII launched unmanned balloon bombs across the Pacific at the American mainland, an early ancestor of the drone bomb.