Time to Make Computers Feel Pain?

I started this post in June, but you know how it is… (it’s September now).

I come back to it now because I saw a pretty bad film on Netflix yesterday (“Tau”), a mad-psychopathic-genius-invents-AI movie. It did have one thing I liked, though: the ability to cause the AI pain, which reminded me of this post.

Many people may relish the prospect, given the amount of pain that people feel they have received at the hands of computers. Until now our rage has sought solace by throwing machines out of windows or smashing laptops with hammers, in fiction at least. This pain is of course entirely self-inflicted, but bad IT makes for a powerful feedback loop. The most important trait for dealing with IT: patience.

However, this is not the kind of revenge- or hate-led fury (like in the film) that I am thinking about.

These are ideas that surfaced after reading, in quick succession, “Life 3.0” by Max Tegmark, “To Be A Machine” by Mark O’Connell and “Other Minds: The Octopus and the Evolution of Intelligent Life” by Peter Godfrey-Smith. Two themes struck me from the first two of these books: the fear of an AI explosion, and transhumanism, the desire to upload minds and consciousness to machines to “solve death”. These, I believe, are more closely related than we might imagine.

As a general point, the majority of effort has been in the direction of organic to machine. All our technology has been augmenting our physical and mental selves. I have not seen much evidence that the technology we have is being given human attributes. To be fair, I haven’t looked for any research on this. AI, you may say, is the obvious exception: aren’t we trying to make machines’ minds like ours?

Well yes, but we are forgetting one small thing: the body. If you know some Zen Buddhism, you may have come across “Shinshin Ichinyo”: mind and body as one. Again, I haven’t done any research into this, but the phrase itself neatly sums up the idea.

The mind (in the sense of the higher levels of consciousness that transhumanists seek to preserve) is, I believe, an epiphenomenon brought about by the functioning of the brain, particularly the intelligent brain. I do not think I am alone in this belief. The function of the brain is to run the body, ultimately so that the body can reproduce (be genetically successful) and life goes on.

The brain evolved to process sensory feedback from the body. As bodies have evolved more complex organs and senses, so the brain has grown to process the input. What we interpret as pain is of course a survival mechanism to move the body out of harm’s way, and as such probably the most powerful force to direct action and ultimately behaviour.
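As a crude computational analogue of that last point, here is a toy sketch (entirely my own illustration, not anyone’s real AI architecture): an agent on a one-dimensional strip of cells receives a strong negative “pain” signal whenever it lands on a harmful cell, and by simple trial and error learns that this is a place to avoid.

```python
import random

random.seed(0)

HOT = 3          # position of the harmful cell on a strip of 5 cells
PAIN = -10.0     # strong negative signal, analogous to pain
STEP = -0.1      # mild cost of moving, so ordinary cells aren't perfectly neutral

def reward(pos):
    """Signal received on entering a cell: sharp pain at the hot cell."""
    return PAIN if pos == HOT else STEP

# Learned estimate of how "bad" each cell is, starting neutral.
values = [0.0] * 5

def train(episodes=200, alpha=0.1):
    for _ in range(episodes):
        pos = random.randrange(5)
        for _ in range(10):
            # Wander randomly left or right, staying on the strip.
            pos = min(4, max(0, pos + random.choice([-1, 1])))
            # Nudge the cell's value toward the signal received there:
            # repeated pain drags the hot cell's value far below the others.
            values[pos] += alpha * (reward(pos) - values[pos])

train()
# The cell the agent has learned to regard as worst is the painful one.
print(min(range(5), key=lambda i: values[i]))  # prints 3, the hot cell
```

The point of the sketch is only that a pain-like signal, with no other machinery, is enough to shape behaviour: the agent ends up “knowing” where harm lies purely because harm hurt.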

So if we want to influence the behaviour of artificial beings (I won’t go so far as to say life yet), then one way is to mimic our own evolution with a body and senses. If we are to hope that AI will have anything in common with us, then it must be able to sense the world the way we do, with similar senses. In a wider context, we are defined by our boundaries: our lifespan, the limits of our senses and the functionality of our bodies. (For a fascinating aside on our bodies, see Alice Roberts’s BBC4 programme on Ultra Humans.) What are the boundaries that affect an AI? We should consider carefully what abilities we give them; every new “functionality” also adds limits.

Many questions arise if we are to try to direct AI evolution along a path similar to our own, in order that we have more in common: what is a program’s “sensory envelope”? What effects can sensory input have on it? Is there an equivalent of “death”? Are there “individual” AIs, or is there, ultimately, only one? The more I think about it, the more questions arise.

And then there is the question of morals. They are another layer of behaviour, but where do they come from? A survival mechanism? A result of cooperative behaviour and societies? Cooperation only comes into play if there are numerous individuals, and there might only be one AI. Unfortunately, philosophy is not a priority for me at this point in time, so I rely on friends to send me snippets from “Philosophy Matters”, which is the philosophy equivalent of “Dilbert”.

Compared to the complicated questions in this rather rambling post, even the meaning of life seems like an easier question… isn’t it to reduce entropy and evolve to the extent that life can overcome the death of the Universe?