The world we see in the movie Her isn’t far off
“Two weeks ago, I watched the movie Her in preparation for an interview by a Brazilian newspaper. I knew I would find something closer to science fiction than reality, but the movie does have a foundation in reality. It was particularly interesting to see that the future depicted in the movie shows a sincere attempt to reconcile technological evolution with things our eyes and hearts can recognise, like handwritten letters and wooden furniture.
Thirty years ago, when the Apple Macintosh was unveiled to the world, it was considered revolutionary. In 30 years, Apple managed to build a phone whose computational capacity is almost 200 million times more powerful than the first Macintosh. Projecting 30 years from now, the idea that we’ll have a molecule-sized computer, some billion times more powerful than the iPhone of today, isn’t as crazy as you may think.
The fundamental issue I see with these bold predictions is that computational power isn’t enough. Today’s most powerful supercomputer can simulate one second of activity in one percent of our brain, but it takes more than half an hour to do so. I have no doubt that, in terms of FLOPS, by 2045 we’ll have insanely fast computers, but we also need the right software: software that can work like the brain. Our comprehension of how the brain works, and of how to build software that mimics it, will need to evolve exponentially as well.
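The scale of that gap is worth making concrete. A rough back-of-the-envelope calculation, assuming the oft-cited 2013 K computer figure (about 40 minutes of wall-clock time for one second of activity in roughly 1% of the brain) and a loose Moore's-law-style doubling rate, gives a sense of why the 30-year horizon is plausible:

```python
import math

# Assumption: the widely reported 2013 K computer result, not a figure
# stated in this post -- ~40 minutes of compute for 1 second of activity
# in ~1% of the brain's neurons.
wall_seconds = 40 * 60        # 2400 s of compute time
simulated_seconds = 1         # 1 s of simulated brain activity
brain_fraction = 0.01         # 1% of the brain

# How far from real-time, whole-brain simulation does that leave us?
slowdown = wall_seconds / simulated_seconds   # 2400x too slow for 1%
compute_gap = slowdown / brain_fraction       # ~240,000x for the whole brain

# If effective compute doubles roughly every 1.5 years (a deliberately
# crude assumption), how long until the gap closes?
years_needed = math.log2(compute_gap) * 1.5

print(f"Compute gap: {compute_gap:,.0f}x")
print(f"Years to close at one doubling per 1.5 years: {years_needed:.0f}")
```

Closing a gap of roughly 240,000x takes about 27 years of steady doubling, which is why hardware alone lands conveniently near the 2045 predictions; the software question remains entirely open.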
From initiatives like the European Human Brain Project, quantum computers (which might prove to be AI accelerators), nanotechnology, and serious advances in neuroscience that are already happening, I believe we’ll have examples of strong AI in less than 30 years, or at least AI agents that task themselves with learning all about our universe and its mysteries.”
Cinema has long been enamoured with the ideas of robot sentience, Artificial Intelligence, transferring the human mind into machines, or more recently, uploading consciousness into the ‘cloud’.
From Metropolis in 1927 there is a continuous arc to the Matrix movies and the yet-to-be-released Transcendence, which raises the (for some philosophers, very real) issue of “The Singularity” – “a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature”.
Arnie addressed the issues of AI and the Singularity with great sensitivity and intellectual rigour in the Terminator series, but who could forget Hollywood’s most cerebral exploration of AI in Weird Science.
But I reckon artificially intelligent robots and cloud-based computer overlords are the easy bit. If we want to set ourselves a real challenge, your weekend mission, should you choose to accept it, is to design a PAPS – a Perfect Artificial Pain System.
Where would you start? What features would be essential? How would it work? How would you turn it on? How would you turn it off? Would it be the same as human pain, or how would it be different? What base components would you need?
No one knows the answer so you can’t be wrong!
Get creative and get cracking – you’ve got 48 hours.
Get up to date and get your think on at a noigroup course
It’s not so much about the system that one might construct – I think that would be relatively easy – but the rewriting of the modulations of the interpretations to be drawn from the information gathered. The building blocks of consciousness … now there’s a weekend’s work and a bit!
My thoughts in writing this post were in part influenced by a chapter by Austen Clark in “Pain: New Essays on Its Nature and the Methodology of Its Study” edited by Murat Aydede. Clark discusses some work of Paul Brand:
In fact people have tried to develop prosthetic pain systems, and have run up against precisely this barrier. The nerve damage caused by leprosy leaves the patient insensitive to pain in feet and hands. Paul Brand showed that flesh does not rot in leprosy, nor do fingers or toes fall off; instead all the damage is self-inflicted, and arises because the patient does not feel any pain. With colleagues he put in an NIH proposal to develop a prosthetic pain system, using pressure sensors in gloves or socks and a warning signal to alert the patient if some activity was damaging. Despite years of effort the system failed, and the reasons for its failure are quite instructive. Brand says:
“Patients who perceived “pain” only in the abstract could not be persuaded to trust the artificial sensors. Or they became bored with the signals and ignored them. The sobering realization dawned on us that unless we built in a quality of compulsion, our substitute system would never work. Being alerted to the danger was not enough; our patients had to be forced to respond. Professor Tims of LSU said to me, almost in despair, “Paul, it’s no use. We’ll never be able to protect these limbs unless the signal really hurts. Surely there must be some way to hurt your patients enough to make them pay attention.” (Brand & Yancey 1993, 194)
So they decided to make the signal painful: a high voltage, low current electric shock to the armpit, a place where most leprosy patients could still feel pain. But even that didn’t work very well; he says most patients saw the shocks as punishment for breaking rules, rather than signals of danger one would naturally want to avoid. He says:
“In the end we had to abandon the entire scheme. …Most important, we found no way around the fundamental weakness in our system: it remained under the patient’s control. If the patient did not want to heed the warnings from our sensors, he could always find a way to bypass the whole system … Why must pain be unpleasant? Why must pain persist? Our system failed for the precise reason that we could not effectively duplicate those two qualities of pain. The mysterious power of the human brain can force a person to STOP!–something I could never accomplish with my substitute system. And “natural” pain will persist as long as danger threatens, whether we want it to or not; unlike my substitute system, it cannot be switched off.” (Brand & Yancey 1993, 195–6)
- Clark, A. (2005). “Painfulness is Not a Quale”. In M. Aydede (ed.), Pain: New Essays on Its Nature and the Methodology of Its Study. Cambridge, MA: MIT Press, pp. 177–197.
- Brand, P. and Yancey, P. (1993). Pain: The Gift Nobody Wants. New York: HarperCollins.
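Brand's account of the failed prosthetic system reads like a requirements list in negative. A minimal sketch of the naive design he describes makes its fatal flaw visible in the code itself; the sensor API, threshold, and function names here are all illustrative assumptions, not Brand's actual engineering:

```python
from dataclasses import dataclass

@dataclass
class PressureSensor:
    """Hypothetical in-glove sensor; returns pressure in arbitrary units."""
    reading: float = 0.0

    def read(self) -> float:
        return self.reading

DAMAGE_THRESHOLD = 7.0  # illustrative value

def prosthetic_pain_step(sensor: PressureSensor, user_enabled: bool) -> str:
    """One tick of the naive warning loop Brand describes.

    The fatal flaw is encoded in the signature: `user_enabled`
    remains under the patient's control, so the whole system can
    always be bypassed -- unlike natural pain.
    """
    if not user_enabled:      # Brand: "he could always find a way
        return "bypassed"     # to bypass the whole system"
    if sensor.read() > DAMAGE_THRESHOLD:
        return "warning"      # an alert only; no compulsion to respond
    return "ok"
```

Natural pain has no `user_enabled` flag and returns no ignorable "warning" string: it is unpleasant, persistent, and not switch-off-able, which is precisely what this design cannot replicate.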
There are some really important ideas in the above passages for me: the notion of pain as an imperative with compulsive value; the importance of pain being able to grab attention; notions of punishment and control; and maybe the beginnings of an understanding of the evolutionary and biological advantage of a system (nociception) that sensitises over time to stimuli rather than attenuating to them.
If pain can be conceptualised as a conscious correlate of threat, and if we were to start building a system to do this, there are some fundamental questions first: threat to what? Threat in what way? How would the system evaluate information in terms of threat and danger? I think this is maybe what you are hinting at, David?
More than a weekend’s work indeed.
Yes Tim, exactly, and simply put: “One man’s meat is another man’s poison”, or maybe we should say “One human’s good thought is another’s nightmare”. We can never replicate the uniqueness of each individual … by the way, I love your mind x
I have been pondering your challenge on the long flight from Dallas to Adelaide, aided by a few shirazes.
My conclusion is that our “pain system” is awesome; we should be proud of it, and it could perhaps only be improved if we had better detection systems for slow-growing tumours and carbon monoxide, although such a rejig would not necessarily have to include pain (better olfaction may do the trick). One brilliant piece of the pain system is its back-up – the support from a host of other systems (motor, endocrine, sensory, cognitive, emotional, respiratory etc.) – which means that pain may not have to be produced; other systems can do the trick.
The societies which house the threats which engage our pain systems may well be up for adjustment.
Hope they were Barossa shirazes.
Say you could start from scratch, design the thing any way you wanted – would you change anything at all? Put another way, what if you were tasked with developing the protective systems for an advanced humanoid robot – would we still want pain to be unpleasant? Could a system be designed where pain stopped as soon as any damage was repaired?
Until the back-up becomes maladaptive, that is; then we are in the fertiliser …
I have a few considerations for such a system. First, if we were to design an evolutionary system, we would need to understand the difference between the phenomenal experience and the expressive experience. To me these represent two correlated responses that we may need to tease out if we are going to make evolutionary arguments.
So if we break things down this way, we can look at animal behavior and ask: “Is it requisite to express pain if one feels the phenomenon of pain?” I think the answer to this question is probably no in an evolutionary sense, given that there are plenty of animals that can experience an existential threat and react accordingly without an overt expression of pain. Thus we can conclude that man’s ability to express pain is sufficiently unique. Before we get further into this, I want to state that the phenomenon of pain would be, and is, impossible to compare across domains of animals, and therefore we can only infer that animals have some primitive ability to detect threat and react. That aside, I want to focus more on the expression of pain.
This is probably an evolved capacity that developed on top of either a phenomenal experience of pain or nonconscious behavioral protective patterns (if you want to deny animals and insects any sort of phenomenological capacity). So the question is: what domain does this sub-capacity fit underneath? In adaptive arguments it is important to think of the ultimate domain of effect: reproduction. How does pain expression affect reproduction?
IMO, there is probably a class of sub-domains such as cooperative inter-group coordination (a behavior typical of our species). So then the question is how does pain expression affect cooperative inter-group coordination?
I think this can be answered in a variety of ways, from groups identifying group-level threats to assessing group capacity to accomplish a group-level goal (raid another group, move locations, etc.). It is also safe to acknowledge at this point that the adaptive argument opens the door for group specialists to enter the picture to help maintain group function despite members becoming injured or ill. These specialists, known as healers and shamans, could help contain social memes of illness by allowing for the disclosure of symptoms but not the propagation of the message (of distress); in other words, preventing individual-level distress from becoming group-level distress and incapacitating the group.
As you can see I have worked backwards from the group capacity to the individual in order to specifically analyze certain aspects of pain behaviors that a prosthetic pain device might not be able to help with.
Focusing on the individual, I think there are some very important limitations that would create severe problems in developing such an artificial system. First, the system that evolved in humans and other animals was designed to handle specific threats in the environment; these systems are not as capricious as we would like to believe. Therefore, anyone designing such a system would need to give it a body, and this body’s relationship with the environment would determine which threats it is likely to respond to and what resources are necessary to mount a response.
At this point it is important to differentiate between a content-free design and a content-specific architecture of the nervous system. Some of cognitive psychology suffers the delusion that our cognitive capacities are built on content-free mechanisms and are largely learned. Yet I think the embodied cognition movement, together with evolutionary psychologists, is making a pretty solid case that our nervous system’s psychological capacity is built on content-specific mechanisms that are modified by learning.
A counter-argument to the point I just made would be that we can artificially recreate visual input through tactile input. This has been shown to be possible. However, this may not be the case with all senses, due to the unique and functionally disconnected architecture of their inputs and outputs. For example, is it possible to recreate visual input through olfactory, gustatory or auditory stimulus? Maybe for the last (audition), but I would see it as a large methodological challenge for the first two. Thus, simply applying augmented tactile input or electrically induced stimulus to a physically and temporally distal site may not be interacting with the nervous system in the content-specific way it was designed to, in order to respond to threat (real or perceived).
Further, I think one is challenged when going outside traditional nociceptive channels of input, because these probably perform very important nonconscious regulation of posture, movement and pressure, adjusting body positions in ways that avoid painful outputs by generating a shift of body position before nociceptive input mounts and a pain percept is triggered.
So, to summarize, an artificial pain system would require the following three requisites:
1. A body (or a collection of unified behavioral routines coordinated in a physical being)
2. Environmental aspects that constitute internal or external existential threats to that organism.
3. A method of communicating those environmental threats to the behavioral coordinating routines that subsume the body. In the case of interfacing a human with an electronic device, I would hypothesize that this would require interaction with content-specific channels and processing centers.
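The three requisites above can be rendered as a skeleton, just to show how they hang together. Everything here is an illustrative assumption (the names, the single thermal channel, the thresholds), not a proposed design:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Body:
    """Requisite 1: a body -- unified behavioral routines in a physical being."""
    temperature_c: float = 37.0

def thermal_threat(body: Body) -> float:
    """Requisite 2: an environmental condition that counts as an existential
    threat to *this* organism (content-specific, not a generic alarm).
    Returns 0 within a tolerated band, rising urgency outside it."""
    return max(0.0, abs(body.temperature_c - 37.0) - 5.0)

@dataclass
class Organism:
    """Requisite 3: channels communicating threats to the behavioral
    coordinating routines that subsume the body."""
    body: Body = field(default_factory=Body)
    threat_channels: List[Callable[[Body], float]] = field(default_factory=list)

    def protective_urgency(self) -> float:
        # Threat signals feed the behavior-coordinating routines directly,
        # rather than raising an ignorable, switch-off-able warning.
        return sum(channel(self.body) for channel in self.threat_channels)

robot = Organism(threat_channels=[thermal_threat])
robot.body.temperature_c = 50.0
print(robot.protective_urgency())  # 8.0
```

Note what the skeleton leaves out, deliberately: nothing here makes the urgency signal compulsive or persistent, which, per Brand's experience quoted earlier in the thread, is exactly the part that resists engineering.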
Maybe that’s a start. A very intellectually stimulating topic; thanks for the heads up.
Are the architects of these “man-made” systems going to address the question of the incalculable and invisible component … the spirituality of the individual being?