Hitting the off-switch problem

No, no, I’m not hitting the off switch, although you could be forgiven for thinking so given how infrequently I post. This is in response to a recent lecture by Stuart Russell that I attended at the 2019 DX Expo.

In this interesting talk, one of the topics was the off-switch problem, described on Wikipedia and no doubt in his latest book. This problem can be summarised as follows:

“A robot with a fixed objective has an incentive to disable its own off-switch.”

This is about who, or what, has control: are humans able to turn the robot off if its objective does not align with ours?

The theory goes that you give the robot a positive incentive to turn itself off in situations where it determines that the outcome of its actions is uncertain.
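As a rough illustration (my own toy numbers and simulation, not Russell’s formal model), the incentive can be seen by comparing the expected value of acting regardless with the expected value of deferring to a human who hits the switch whenever the outcome would be bad:

```python
import random

# Toy sketch of the off-switch incentive. The robot is unsure whether its
# action is beneficial (+10) or disastrous (-50).
# Policy A: disable the switch and act regardless.
# Policy B: defer to a human who switches the robot off when the outcome is bad.

def expected_values(possible_utilities, trials=100_000):
    act_total, defer_total = 0.0, 0.0
    for _ in range(trials):
        true_utility = random.choice(possible_utilities)  # robot doesn't know which applies
        act_total += true_utility              # acts no matter what
        defer_total += max(true_utility, 0.0)  # human blocks the bad outcome
    return act_total / trials, defer_total / trials

acting, deferring = expected_values([10.0, -50.0])
print(f"disable the switch and act: {acting:+.1f}")    # about -20
print(f"let the human decide:       {deferring:+.1f}")  # about +5
```

The more uncertain the robot is about whether its action is good, the more it gains by leaving the switch alone.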

I have two problems with this theory.

The first is that a physical robot acting in the real world is equated with the AI. This can be misleading. Robots and AI are two different concepts. It’s true, of course, that some or most robots will run AI s/w, but it’s not true that all AI needs direct control of a physical actor to achieve its goals. That can be done by manipulation of data and “human engineering”. The physical presence is almost irrelevant. The problem we are trying to solve is one of control. And there’s no off switch for the internet.

That leads me to the second objection. Implementing an algorithm that makes the robot turn itself off just moves the problem from the physical switch to the controlling algorithm. It is assumed that we control the algorithm and the code. The real danger in losing control of AI comes when AI s/w becomes intelligent enough to write itself. All the theory does is move the problem, arguably to a space that is more difficult to solve. At best, probabilistic programming is a short-term solution which only lasts as long as we control the code.

Thought Metric

I just had a thought. I was reading chapter 2 of Nick Bostrom’s “Superintelligence” (a bit out of date now, but it feels like a classic). He describes various ways an AI might be created (modelling evolution, modelling the brain, self-improving programs, etc.), finishing the section by saying how AIs would not much resemble the human mind and that their cognitive abilities and goals would be alien. This follows the idea of AIs being able to improve themselves, for which there obviously needs to be a metric of what improvement means.

My thought is about thinking; a meta-thought, perhaps. What are the units of thinking? It seems like it might be a useful unit to have when comparing cognitive abilities. What is the total number of these units that the human race has produced over the length of our history? Is the computing metric of MFLOPS a useful starting place, or just an indication of the alien nature of computers, i.e. not useful to us? I think the latter.

If we just say, for the sake of argument, that the unit is ideas per lifetime, individuals will vary widely, with great minds producing tens of thousands or maybe millions of ideas (without any qualitative limitation on “idea”). There will be an average, which we can multiply by the number of people that have ever lived to give a number which is the total “idea output” of the human race.
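As a back-of-envelope sketch (both numbers are assumptions, chosen purely for illustration):

```python
# Back-of-envelope version of the "total idea output" estimate above.
ideas_per_lifetime = 50_000          # assumed average; individuals vary enormously
people_ever_lived = 100_000_000_000  # roughly 100 billion, a commonly quoted estimate

total_idea_output = ideas_per_lifetime * people_ever_lived
print(f"total human 'idea output': {total_idea_output:.1e} ideas")  # about 5e15
```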

So how might an AI compare; specifically, how long would an AI take to have all the ideas that the human race has had, or would it even be capable of having the same ideas as us?

Is this a potential reason for an AI to keep us around? If an AI values ideas, and it probably would, do we collectively provide an “idea resource” that an AI can use, particularly if we are capable of producing thoughts and ideas that an AI can’t, because of its nature?

Perhaps we can take some comfort from the fact that we have amassed a huge idea bank, which an AI might not be able to reproduce via its own thinking, either because of capacity or cognitive architecture. It might *absorb* all the ideas we have produced, but, having a level of general intelligence, and unless it were able to reproduce the ideas itself, it would recognise that we have a unique ability.

Comforting, I think.

Chalk up a couple more ideas for the human race, even if someone else has had the idea already!

Time to Make Computers Feel Pain?

I started this post in June, but you know how it is…(it’s September now)

I come back to it now because I saw a pretty bad film yesterday on Netflix (“Tau”). Mad psychopathic genius invents AI movie. It did have one thing I liked though, the ability to cause the AI pain, which reminded me of this post.

Many people may relish the prospect, given the amount of pain that people think they have received at the hands of computers. Up till now our rage has sought solace in throwing machines out of windows or smashing laptops with hammers, in fiction at least. This pain is of course entirely self-inflicted, but bad IT makes for a powerful feedback loop. The most important trait for IT: patience.

However, this is not the kind of revenge- or hate-led fury (like in the film) that I am thinking about.

These are ideas surfacing after reading, in quick succession, “Life 3.0” by Max Tegmark, “To Be A Machine” by Mark O’Connell and “Other Minds: The Octopus and the Evolution of Intelligent Life” by Peter Godfrey-Smith. Two themes struck me from the first two of these books: the fear of an AI explosion, and transhumanism, the desire to upload minds and consciousness to machines to “solve death”. These, I believe, are more closely related than we might imagine.

As a general point, the majority of effort has been in the direction of organic to machine. All our technology has been augmenting our physical and mental selves. I have not seen much evidence that the technology we have is being given human attributes. To be fair, I haven’t looked for any research on this. AI, you may say, is the obvious exception: aren’t we trying to make machines’ minds like ours?

Well yes, but we are forgetting one small thing: the body. If you know some Zen Buddhism, you may have come across “Shinshin Ichinyo”: Mind and Body as one. Again, I haven’t done any research into this, but the phrase itself neatly sums up the idea.

The mind (in the sense of the higher levels of consciousness that transhumanists seek to preserve) is, I believe, an epiphenomenon brought about by the functioning of the brain, particularly the intelligent brain. I do not think I am alone in this belief. The function of the brain is to run the body, ultimately so that the body can reproduce (be genetically successful) and life goes on.

The brain evolved to process sensory feedback from the body. As bodies have evolved more complex organs and senses, so the brain has grown to process the input. What we interpret as pain is of course a survival mechanism to move the body out of harm’s way, and as such probably the most powerful force to direct action and ultimately behaviour.

So if we want to influence the behaviour of artificial beings (I won’t go so far as to say life yet) then one way is to mimic our own evolution with a body and senses. If we are to hope that AI will have anything in common with us, then they must be able to sense the world in the way we do, with similar senses. In a wider context, we are defined by our boundaries: our lifespan, the limit of our senses and the functionality of our bodies. (For a fascinating aside on our bodies see Alice Roberts’s BBC4 program on Ultra Humans.) What are the boundaries that affect an AI? We should consider carefully what abilities we give them; every new “functionality” also adds limits.
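To make the earlier point about pain as the most powerful director of behaviour concrete, here is a toy sketch of my own (all the numbers are invented): an agent that weights pain heavily will route around harm even when the harmful path looks slightly more rewarding.

```python
# Toy illustration: "pain" as a heavily weighted negative signal that redirects
# behaviour, much as pain overrides other drives in animals. All values invented.

def choose_action(actions, reward, pain, pain_weight=10.0):
    """Pick the action with the best reward after heavily penalising pain."""
    return max(actions, key=lambda a: reward[a] - pain_weight * pain[a])

actions = ["reach_through_fire", "walk_around_fire"]
reward = {"reach_through_fire": 5.0, "walk_around_fire": 4.0}  # the shortcut looks better...
pain   = {"reach_through_fire": 1.0, "walk_around_fire": 0.0}  # ...but it hurts

print(choose_action(actions, reward, pain))  # -> walk_around_fire
```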

Many questions arise if we are to try and direct AI evolution along a path similar to our own, in order that we have more in common: what is a program’s “sensory envelope”, what effects can sensory input have on it, is there an equivalent of “death”, are there “individual” AIs or is there, ultimately, only one? The more I think about it, the more questions arise.

And then there is the question of morals. They are another layer of behaviour, but where do they come from? A survival mechanism? A result of cooperative behaviour and societies? Cooperation only comes into play if there are numerous individuals, and there might only be one AI. Unfortunately, Philosophy is not a priority at this point in time, so I rely on friends to send me snippets from “Philosophy Matters”, which is the Philosophy equivalent of “Dilbert”.

Compared to these complicated questions in this rather rambling post, even the meaning of life seems like an easier question…isn’t it to reduce entropy and evolve to the extent that it can overcome the death of the Universe?

Inorganic Intelligence

AI. Artificial Intelligence. We’ve been writing about it, researching it, creating it, scared of it, delighted with it for decades. However, I think the word “Intelligence”, like so many others, drifts across a large range of meanings and so I would like to record my view.

The first, and most common, “mistake” is to equate intelligence with knowledge. People are often described as intelligent when they can impress others with facts on a subject or range of subjects. This definition of intelligence makes the phrase artificial intelligence easy to accept, because of course computers can store vast amounts of information and retrieve it quickly, giving an appearance of intelligence by this meaning.

A second meaning is to demonstrate a “thinking” skill. Playing chess, or winning at “Go”, are the oft-cited examples of artificial intelligence. But before we call it artificial intelligence, is it in fact intelligence? I would say not. These are mathematical algorithms or game theory.

Nevertheless, despite these underwhelming interpretations of intelligence, many people fear “AI” and believe it is just around the corner. The combination of mathematical techniques broadly categorised as neural networks, big data and machine learning, together with physical technologies offering ever faster processing, greater storage and even quantum computing, has led people to believe that our very humanity is under threat.

It is wise to assess the risks, and who knows, we may one day give rise to artificial intelligence, but I don’t think we are in danger.

I think there are a number of aspects to intelligence. In no particular order:

  • problem solving
  • awareness
  • creativity
  • harmlessness

This may look like an odd list, and that is because it is very “human-centric”. The intelligence I am trying to describe is a human one – a set of attributes against which we can compare non-human, particularly machine (inorganic), behaviours. We can imagine “alien” behaviours (a good example is in the book “Solaris”), but their very “alienness” would make it impossible for us to comprehend whether they were intelligent or not. We are doomed to measure everything only against what we can comprehend.

So, back to the list.

Problem solving is about goal or purpose. I think this is fundamentally tied in with evolution and survival. Being able to solve problems (think) bestows an evolutionary advantage, not just in terms of figuring out new ways to “eat” but new ways to evade predators or attack prey. This was beautifully demonstrated in the recent Blue Planet series, with the reef octopus outsmarting the shark and the fish using tools to build a nest. We can see a whole spectrum of this kind of intelligence across the animal kingdom, and we often refer to dogs, dolphins and whales as clever creatures. In this dimension, they really are. [In the human case, evolution may have gone too far and made our brains so big they are a threat to the ecosystem and life itself. Kurt Vonnegut explores this idea in “Galapagos”.]

Awareness is along the lines of “emotional intelligence”: a sense of empathy, of what impact your words and actions are having on the internal mental “state” of another. I’m not saying that this type of awareness must be used for “good”, which would take us down the route of bringing a moral dimension to intelligence. Perhaps a broader definition could include this, but harmlessness, discussed below, is as far as I will go.

Creativity – something we like to think we do well. Producing something from nothing. New ideas, or new ways to communicate ideas. To overlap with awareness, creativity is the directed use of methods to deliberately change the internal “state” of other individuals. Humour, art, music, and a search for knowledge born of curiosity are all involved.

Harmlessness is about not destroying yourself. A bit like awareness, but directed at your physical surroundings. These can be your immediate surroundings, or the Universe as a whole. It can be over any time span, from minutes to aeons. We would certainly consider ourselves more intelligent if we knew what impact our actions would have in the distant future. And we would not describe anything as intelligent which acted to destroy the environment that sustained it. Given our impact on the planet, this makes humans look pretty stupid; back to “Galapagos” again.

How is “AI” in computers and software stacking up against the attributes in my list? Not very well, I would say. Of course, my list does not contain AI’s strong hand of knowledge or mathematical problem solving. Perhaps I am deliberately biasing my list in favour of humanity!

Problem solving: do AIs evolve? Do they write themselves? Are they driven to live long enough to pass their genetic information on to the next generation? Does not apply.

Awareness: I have yet to see a program demonstrate empathy for another.

Creativity: Never mind the Turing test, when was the last time a computer made you laugh, deliberately, by telling a joke? Actually, maybe programs are doing that to each other all the time and we just don’t know about it. Tron?

Harmlessness: when AI becomes self sufficient then we can see if it is harmless. Long way to go.

Intelligence cannot be artificial; it’s just a set of attributes of a system, be it organic or inorganic. What we really mean by AI is how closely a system which has not arisen from Darwinian evolution compares with us. Artificial intelligence should really be called inorganic intelligence. Of course, you could also take the view that we are just a stage in the evolutionary process which adapts into inorganic intelligence. “Sometimes men build robots, sometimes robots build men. What does it matter, really, whether one thinks with metal or with protoplasm?” to give the last word to the representative of the highest possible level of development (H.P.L.D.) in Stanislaw Lem’s “The Cyberiad”, Altruizine.

Robotic Process Automation

When I started this blog in 2010 it was pretty technical and centred around the uptake of virtualisation in the IT industry. Since then virtualisation has become mainstream and as my role has changed the blog has become more of a personal technology diary.

However, there is a new technology in town which promises to change the industry, and the world. Robotics. Or, to be precise, Process Automation. The first hurdle the industry faces is to get away from the image of a “Robot”, which is essentially a machine. It doesn’t do itself any favours by using the word, not to mention that 90% of articles contain a picture of one.

It’s just software. And what is our icon for software? Well, that’s the problem…we don’t really have one. It’s always difficult to find images for non-tangible things. Use Google to search for software and specify “Images”. See what you get. Nothing useful, I bet.

Nevertheless, process automation is crossing the chasm and I hope to blog more about it. I’ll put the problem of an icon for process automation on the back burner. There is an IEEE working group chaired by Lee Coulter. Not sure if the IEEE use logos on standards though.