The kitchen of the future could see all our fancy devices – even refrigerators and ovens – replaced by a 3D printer which will create meals from cartridges full of carbohydrates, protein powders and oils.
While only the very rich will be able to afford to eat real meat, fish and vegetables, engineer Anjan Contractor predicts everyone else will eat customized, nutritionally appropriate meals synthesized one layer at a time from cartridges of powder and oil bought at the corner grocery store.
With traditional food sources extremely rare, those powders could be anything containing the right organic molecules, including insects.
Dutch technology company TNO Research has suggested that 3D printing could make it possible to turn food-like starting material, such as algae, insects and grass, into edible meals.
Basically, it’s the idea that food is made up of core components that can be “assembled” by a printer.
Reminds me a bit of this guy who claims to have invented a drink that provides all the essential nutrients a person needs to stay alive. What does he call it? “Soylent”, of course!
Both articles touch on an interesting idea: food hacking — the notion that we can take food apart and then reassemble it, or assemble new kinds of food from components not necessarily thought of as food. It’s a crazy world in which we live.
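Just to make the “assembled from components” idea concrete, here’s a minimal sketch – the cartridge names and nutrient numbers are invented for illustration – that solves for a blend of base powders hitting a target macronutrient profile:

```python
# Toy "food assembly": pick amounts of hypothetical cartridge powders so the
# blend hits a target macronutrient profile. All numbers are made up.
import numpy as np

# grams of (protein, carbs, fat) per gram of each hypothetical cartridge
cartridges = {
    "pea_protein": (0.80, 0.10, 0.05),
    "rice_starch": (0.07, 0.85, 0.01),
    "algae_oil":   (0.00, 0.00, 0.99),
}

target = np.array([30.0, 60.0, 20.0])  # desired grams: protein, carbs, fat

A = np.array(list(cartridges.values())).T         # nutrients x cartridges
grams, *_ = np.linalg.lstsq(A, target, rcond=None)

for name, g in zip(cartridges, grams):
    print(f"{name}: {g:.1f} g")
```

The printer’s job would then be texture and presentation; the nutritional “recipe” itself is just a small linear-algebra problem.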
In journalism, it’s common for a writer to ask a somewhat rhetorical question and then answer it using a quote from one of the story’s sources. I was struck by this use of the technique in a recent New Yorker article on cyber crime.
“So is there any solution to our cyber problem? Every advance in connectivity and mobility seems to increase the possibilities for crime.”
In the past, I’ve mentioned my sense that 3D printing – a technique for manufacturing a variety of useful objects with a device that can fit in your office or garage – could radically alter the economies of the future. If you can print out a desk, why buy one from IKEA? And if nobody buys from IKEA, why would IKEA employ a workforce? Suddenly you have a lot of unemployed people.
Of course, one might say that 3D printers only endanger the jobs of people who manufacture things that can be printed as one unit, not complex things that have to be assembled. (Though my understanding is that 3D printers can print out pretty complex, interlocking objects. There is a 3D-printed car, though I believe it requires a certain amount of assembly.) In theory, assembly-line jobs would be somewhat safe. However, I just stumbled across this interesting video on a new robot that can be easily trained to do repetitive work. You program it simply by guiding it through the motions you want it to perform, and the robot will keep repeating those motions. And it only costs $20,000. (And doesn’t take cigarette breaks. It could work 24 hours a day, I suppose.)
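For what it’s worth, that “guide it through the motions” style of training is conceptually just record-and-replay. Here’s a toy sketch; the robot object and both of its methods are hypothetical stand-ins, not any real robot’s API:

```python
# Programming by demonstration, reduced to its essence: sample the arm's
# joint angles while a person guides it, then replay them in a loop.
# `robot.read_joint_angles` and `robot.move_to_joint_angles` are hypothetical.
import time

def record_demo(robot, duration_s=10.0, hz=20):
    """Record joint angles while a human physically guides the arm."""
    poses = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        poses.append(robot.read_joint_angles())
        time.sleep(1.0 / hz)
    return poses

def replay_forever(robot, poses, hz=20):
    """Play the recorded motion back on a loop - no cigarette breaks."""
    while True:
        for pose in poses:
            robot.move_to_joint_angles(pose)
            time.sleep(1.0 / hz)
```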
Could robots easily replace humans for assembly-line tasks? Let’s recall how well humans have fared at such work.
In the past, I’ve mentioned Jaron Lanier and his condemnations of Internet culture. I just stumbled on this article by George Saunders (who is apparently some kind of intellectual), which carries the topic forward. He bemoans the way the Internet “reprograms” your brain in such a way that you always have a little voice telling you there’s something else (on the Internet) you should be doing. In his words:
“I do know that I started noticing a change in my own reading habits – I’d get online and look up and 40 minutes would have gone by, and my reading time for the night would have been pissed away, and all I would have learned was that, you know, a certain celebrity had lived in her car awhile, or that a cat had dialled 911. So I had to start watching that more carefully. But it’s interesting because (1) this tendency does seem to alter brain function and (2) through some demonic cause-and-effect, our technology is exactly situated to exploit the crappier angles of our nature: gossip, self-promotion, snarky curiosity. It’s almost as if totalitarianism thought better of the jackboots and decided to go another way: smoother, more flattering – and impossible to resist.
“Twitter is a deliberate abstention. Somehow I hate the idea of there always being, in the back of my mind, this little voice saying: ‘Oh, I should tweet about this.’ Which, knowing me, I know there would be. I’m sure some people can do it in a fun and healthy way, but I don’t think I could. Plus, it’s kind of funny – I’ve spent my whole life learning to write very slowly, for maximum expressiveness, and for money. So the idea of writing really quickly, for free, offends me. Also, one of the simplify-life things I’m doing is to try to just write fiction, period. There was a time there a few years back where I was writing humour, and screenplays, and travel journalism and so on – just trying to keep the juices flowing and kick open some new doors. These, in turn, led to a period of sort of higher public exposure – TV appearances here in the US and some quasi-pundit-like moments. To be honest, this made me feel kind of queasy. I’m not that good on my feet, and I found that I really craved the feeling of deep focus and integrity that comes with writing fiction day after day, in a sort of monastic way. So that’s what I’m trying to do now, as much as I can manage. And Twitter doesn’t figure into that.”
I’ve been doing a little more thinking about the possibilities of three-dimensional printers. I think the ramifications of these devices are staggering.
To clarify, these “printers” are really more like miniature factories that fit in your home. Working from computer files as schematics, a 3D printer builds an object up one thin layer at a time, fusing each layer of liquid plastic (or even metal) to the one below it. (Here’s a wiki on 3D printing.) So users can create any object they can get a schematic for (and that can reasonably be “printed” by their printer).
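To make “layers” concrete, here’s a toy slicer. It cuts a simple solid – a cylinder – into flat layers and emits one circular toolpath per layer. Real slicers work from mesh files like STL and are far more sophisticated, but the core idea looks like this:

```python
# Slice a cylinder into printable layers: one ring-shaped toolpath per layer.
import math

def slice_cylinder(radius_mm=10.0, height_mm=20.0, layer_mm=0.2, segments=64):
    """Return one circular toolpath (a list of x, y, z points) per layer."""
    n_layers = int(round(height_mm / layer_mm))
    toolpaths = []
    for k in range(n_layers):
        z = k * layer_mm
        ring = [(radius_mm * math.cos(2 * math.pi * i / segments),
                 radius_mm * math.sin(2 * math.pi * i / segments),
                 z)
                for i in range(segments + 1)]  # +1 closes the loop
        toolpaths.append(ring)
    return toolpaths

layers = slice_cylinder()
print(f"{len(layers)} layers, {len(layers[0])} points each")
```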
I think it’s reasonable to presume that a lot of schematics for patented designs will make their way to the web, in the same manner that pirated music and movies have thrived there. And it strikes me that many industries could be disrupted by 3D printing in the same way the music industry has been disrupted by mp3s. Take the game of Scrabble. Right now, if I want to play a board game, I have to go to the store and buy a physical copy. But what if I could download a schematic for Scrabble and use my 3D printer to print out the board and tiles? Why, then, buy the real thing like a sucker?
But what does that mean? That game makers go out of business? That their employees become jobless and starve in the streets, forced into a life of prostitution? Maybe… probably.
Let’s take another object: a car. Could a 3D printer print a car? Not now, obviously; cars are too big. But maybe in 20 years we’ll have “printers” that can print huge objects. Cars are, of course, complex objects with a lot of connecting parts – more than one person could assemble alone. But what if we have printers that can not only print out the parts of an object but assemble them too? Why not print out a car, then?
And then what? Does Detroit go out of business? Will the streets be overcome with destitute, jobless Americans hungry for human flesh? There is no doubt that this is the future.
The other day I was importing some photos and videos from my dad’s iPad onto his Mac. The import errored out at some point, and for a while I couldn’t find the files on either the computer or the iPad. Finally I found them in a folder called 2011, which was odd, since many of the photos had been shot last week.
Anyway, it was an absurd experience, since it shouldn’t be that hard to find files that were recently added to a computer. It seems to me there should be a natural-language interface for this sort of thing – some help app where you can type, “Where are the files I just added to the machine?” Or ask other questions in a natural-language way, like “Show me all files with the word ‘donkey’ in them.” By now, this seems like something computers should be able to do.
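For the record, the lookups themselves are trivial – it’s the natural-language front end that’s missing. A rough sketch of the two queries above (the folder paths are just examples):

```python
# "Where are the files I just added?" and "files with 'donkey' in the name"
import os
import time
from pathlib import Path

def iter_files(root):
    """Walk a directory tree; os.walk silently skips unreadable folders."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            yield Path(dirpath) / name

def recently_added(root, hours=24):
    """Files whose ctime (metadata change / creation time) is recent."""
    cutoff = time.time() - hours * 3600
    hits = []
    for path in iter_files(root):
        try:
            if path.stat().st_ctime >= cutoff:
                hits.append(path)
        except OSError:  # broken symlinks, permission errors
            pass
    return hits

def name_contains(root, word):
    """Files whose name contains `word`, case-insensitively."""
    return [p for p in iter_files(root) if word.lower() in p.name.lower()]

for path in recently_added(os.path.expanduser("~/Pictures"), hours=48):
    print(path)
for path in name_contains(os.path.expanduser("~"), "donkey"):
    print(path)
```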
In fact, if computers are as smart as their proponents say they are, we should be able to ask more advanced questions. “What is the true spiritual nature of the universe?” “Do all the conscious beings in the universe contribute to some greater universal consciousness?” “What is the precise nature of being?” Why the fuck aren’t computers answering those questions!
One could argue that the timeline of my life has coincided almost precisely with the emergence of a meaningful Internet. (I use the term “meaningful Internet” to differentiate the Internet of the modern era from the networks that existed as Department of Defense research tools as far back as the late 60s.) When I was a teenager, we had computer bulletin boards. (Interestingly, I remember a guy giving me several floppy disks’ worth of video games he’d downloaded illegally from said bulletin boards. Even back then, file sharing was a problem.) Then, in my 20s, we saw the emergence of the web browser and web graphics. As I’ve matured and become wiser and only better looking, we’ve seen the advent of social networking, Internet-enabled phones and so on.
And I find, as time has gone on, that I’ve become more wary of the Internet. I grow weary of the overwhelming access to information it offers, the endless distractions it flaunts. I find myself genuinely yearning for an Internet-free existence, but realize that would vastly limit my employment options and general swinging lifestyle.
Jaron Lanier is a guy I’ve read about and been interested in. He contributed quite a bit to the Internet technologies referred to as “Web 2.0” and since then has largely condemned the Internet. His reasons are numerous (he has a book on the topic), but in particular he condemns the philosophy that “information wants to be free.” In relation to music file sharing, he says:
“I’d had a career as a professional musician and what I started to see is that once we made information free, it wasn’t that we consigned all the big stars to the bread lines.” (They still had mega-concert tour profits.)
“Instead, it was the middle-class people who were consigned to the bread lines. And that was a very large body of people. And all of a sudden there was this weekly ritual, sometimes even daily: ‘Oh, we need to organize a benefit because so and so who’d been a manager of this big studio that closed its doors has cancer and doesn’t have insurance. We need to raise money so he can have his operation.’
“And I realized this was a hopeless, stupid design of society and that it was our fault. It really hit on a personal level—this isn’t working. And I think you can draw an analogy to what happened with communism, where at some point you just have to say there’s too much wrong with these experiments.”
He also makes a point that I’ve made myself (and thus can be presumed to be quite wise): that anonymity on the Internet has led people to become ugly and wicked in their political disputes. Instead of bringing us together, the Internet is causing people to calcify into their own tribes. One profile of Lanier puts it this way:
“At last we come to politics, where I believe Lanier has been most farsighted—and which may be the deep source of his turning into a digital Le Carré figure. As far back as the turn of the century, he singled out one standout aspect of the new web culture—the acceptance, the welcoming of anonymous commenters on websites—as a danger to political discourse and the polity itself. At the time, this objection seemed a bit extreme. But he saw anonymity as a poison seed. The way it didn’t hide, but, in fact, brandished the ugliness of human nature beneath the anonymous screen-name masks. An enabling and foreshadowing of mob rule, not a growth of democracy, but an accretion of tribalism.
“It’s taken a while for this prophecy to come true, a while for this mode of communication to replace and degrade political conversation, to drive out any ambiguity. Or departure from the binary. But it slowly is turning us into a nation of hate-filled trolls.”
About a month ago, there was a video going around the web of an eagle attempting to fly off with a little kid. The video was quickly revealed to be a fake, but it was realistic enough for what it was attempting to portray.
Now, a couple of days ago, I saw what purported to be video from a traveling Highway Patrol car, which caught another car being smashed to bits by a large truck. I watched the video and found myself thinking, “But is this real?” A few things bugged me. How would this video have been released to the public? Is that really what a car looks like when it collides with a truck?
Now, ultimately I think that video probably was real. But the fact that I wondered about it forced me to consider what I suspect will soon be a major conundrum in the world. Technology will become so good at rendering realistic looking scenarios that we will question everything we see on television or the Internet. The truth will become even more opaque as people and organizations create scenarios that support their particular view of the world. We will be able to trust no one.
And I think it gets even worse. Eventually, technology will allow us to interact directly with a person’s brain in such a way that we will be able to alter the entire sensory world that they perceive. How will we know whether something is real or being placed there by those out to mock or hurt us?
Imagine the following: there’s some nerdy guy in high school. He doesn’t have a lot of friends and spends a lot of his time with computers. Never goes on dates. One day, a beautiful French foreign exchange student comes into the computer lab and starts talking to him. He’s amazed that she shows any interest in him, and eventually they fall in love. Immediately after high school they get married. He goes on to a successful career as a computer software developer. Throughout their many decades together she loves and supports him. On his deathbed, he looks up at her while she holds his hand and says, “Honey, if you hadn’t come into my life I don’t know what I would’ve done.” Then, suddenly, she disappears and the nerd realizes that this entire phony life has been placed into his brain by jocks and preppies (the archenemies of nerds) seeking to humiliate him. As the illusion dissolves and he finds himself a young man back in high school, his mind literally melts and his tormentors laugh at his misery. “HAWHAWHAWHAWHAWHAWHAW!!! Take that, NERD!!!” they scream.
It’s going to be a strange couple of centuries.
For top-secret reasons too lengthy to discuss here, I was researching lie detector tests today. That led me to an interesting Wikipedia article on something I’d never heard of: brain fingerprinting. Basically, the idea is that when you are confronted with a word or image you’re familiar with, your brain reacts differently than it would if the object were unfamiliar. And there are techniques and tools that can measure these brain changes. So, for instance, if you suspected a man was sleeping with your wife, you could kidnap him, drag him to your secret compound, hook him up to the brain fingerprinting device and show him an image of your wife. If his brain insinuated that he was familiar with her visage, you could confidently torture him to death. There may be other uses as well.
Here’s a bit of info…
Brain fingerprinting was invented by Lawrence Farwell. The theory is that the brain processes known and relevant information differently from the way it processes unknown or irrelevant information (Farwell & Donchin 1991). The brain’s processing of known information, such as the details of a crime stored in the brain, is revealed by a specific pattern in the EEG (electroencephalograph) (Farwell & Smith 2001, Farwell 1994). Farwell’s brain fingerprinting originally used the well-known P300 brain response to detect the brain’s recognition of the known information (Farwell & Donchin 1986, 1991; Farwell 1995a). Later Farwell discovered the P300-MERMER (“Memory and Encoding Related Multifaceted Electroencephalographic Response”), which includes the P300 along with additional features and is reported to provide higher accuracy and statistical confidence than the P300 alone (Farwell & Smith 2001, Farwell 1994, Farwell 1995b, Farwell et al. 2012).
In peer-reviewed publications, Farwell and colleagues report error rates of less than 1% in both laboratory research (Farwell & Donchin 1991, Farwell & Richardson 2006) and real-life field applications (Farwell & Smith 2001, Farwell et al. 2012). In independent research, William Iacono and others following identical or similar scientific protocols have reported similarly low error rates and high statistical confidence (e.g., Allen & Iacono 1997).
To ensure accuracy and statistical confidence, brain fingerprinting tests are conducted according to specific scientific standards, which are specified in Farwell 2012 and Farwell et al. 2012.
Brain fingerprinting has been applied in a number of high-profile criminal cases, including helping to catch serial killer J.B. Grinder (Dalbey 1999) and to exonerate Terry Harrington after he had been wrongly convicted of murder (Harrington v. State 2001). Brain fingerprinting has been ruled admissible in court (Harrington v. State 2001, Farwell & Makeig 2005, Farwell 2012).
Of course, the technique has been criticized. If you’re interested, you can read details at the article.
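Setting the controversy aside, the core signal-processing idea is simple enough to sketch: average EEG epochs time-locked to each stimulus type and compare amplitudes in the P300 window. Everything below is synthetic and invented for illustration – a real test uses many channels, artifact rejection and proper statistics:

```python
# Toy P300 detection: compare average amplitude in the 300-600 ms window
# for "probe" stimuli (details only a guilty party would recognize) vs.
# "irrelevant" stimuli. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                        # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)   # 800 ms epochs

def make_epochs(n, recognized=False):
    epochs = rng.normal(0, 5, (n, t.size))  # background EEG noise
    if recognized:  # add a bump peaking ~400 ms after the stimulus
        epochs += 8 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return epochs

probe = make_epochs(100, recognized=True)
irrelevant = make_epochs(100, recognized=False)

window = (t >= 0.3) & (t <= 0.6)
probe_amp = probe.mean(axis=0)[window].mean()
irrel_amp = irrelevant.mean(axis=0)[window].mean()
print(f"probe: {probe_amp:.2f} uV, irrelevant: {irrel_amp:.2f} uV")
# A much larger probe amplitude suggests the subject recognized the detail.
```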
A couple of nights ago, my brother and I were watching an old Western, “The Return of Frank James.” There was a young actor in it who seemed really familiar, but we couldn’t figure out who he was. Finally we realized it was former child star Jackie Cooper.
This got me thinking about the general predicament of watching a movie and finding yourself puzzling over who a certain actor is. With the advent of facial recognition software, this kind of puzzlement should be a thing of the past. Say someone creates a database containing facial information for every famous person. When you’re watching a movie, the broadcast could whittle the list of potential matches down to just the actors in that movie. At that point, you should be able to point at a character on the screen with some kind of laser pointer and get the actor’s name, and perhaps even pertinent facts about his or her career.
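The matching step, at least, is not science fiction. Here’s a rough sketch of the “whittle down to the cast” idea using nearest-neighbor comparison of face embeddings. The embedding vectors below are random stand-ins for what a real face-recognition model would produce; the names are actual cast members of “The Return of Frank James”:

```python
# Identify an on-screen face by comparing its embedding against embeddings
# of only the movie's credited cast. Embeddings here are random stand-ins.
import numpy as np

def identify(frame_embedding, cast_embeddings):
    """Return the cast member with the highest cosine similarity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cast_embeddings,
               key=lambda name: cos(frame_embedding, cast_embeddings[name]))

rng = np.random.default_rng(1)
cast = {name: rng.normal(size=128)
        for name in ["Henry Fonda", "Gene Tierney", "Jackie Cooper"]}

# Pretend the current frame shows a slightly noisy view of Jackie Cooper.
frame = cast["Jackie Cooper"] + rng.normal(scale=0.3, size=128)
print(identify(frame, cast))  # -> "Jackie Cooper"
```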
Actually, the idea that your television should be able to recognize a human face on the screen brings to mind all sorts of possibilities. For instance, it would be great to watch a scene in a movie and attach a Groucho Marx glasses-and-mustache contraption to an actor’s face. Or paint in devil horns. It’s almost 2013 – why are we not doing this?
The most obvious use of this technology is the following: we should be able to interact with the screen in such a way that we can drag giant cartoon penises around and poke them at the faces of the actors on screen. So you’ve got some dramatic actress moping about the loss of her children in “Sophie’s Choice,” or doomed Holocaust victims condemning Germans in “Schindler’s List,” and meanwhile cartoon penises are bouncing off their chins.
You might be saying, “Wil, this is silly. What is the point of such technology?” To which I argue that manipulating cartoon penises on screen is clearly the wave of the future. You can either get on board, or be left behind.