Internet of Thing

I joined the internet of things! Well, not me personally, but I bought a data logger which uploads its data to easylogcloud.

It’s a humidity / temperature logger which I have installed next to the piano. Pianos don’t like humidity, or, to be more precise, changes in humidity, so I am keeping an eye on it to determine whether I need a Dampp-Chaser.

Some Good News

Actually, the day after I wrote the last post, I managed to fix my old bricked phone. It certainly helps if you read the instructions properly. That’s what we call a PICNIC error (Problem In Chair, Not In Computer).

So, I can use the tool now, but I haven’t yet worked up the courage to try newer firmware on my newer phone.

Also, I took the GCP Associate Cloud Engineer exam on Friday and passed (at least provisionally). They send you a confirmation in about a week, and I guess it would be unusual and somewhat unfair if they changed a provisional pass into a fail!

The exam itself is 50 multiple-choice questions, and if you don’t know the answer 100% then it’s relatively easy to eliminate obviously wrong answers and make a good stab at the remaining two by carefully reading the question. In fact, careful reading of the question is the most important thing you can do – as important as revising!

My main learning resource was Linux Academy. The lectures and labs were good and their practice exam excellent preparation.

…So it’s a week later and I forgot to post this. On the plus side, Google has confirmed my exam result!

Old phone – new problem

Whilst I wait for another Android firmware version to download, I can pass the time by writing this.

I dipped my toe into the mysterious world of firmware upgrades on Samsung phones and have managed to brick (technical term) my test phone. This all came about because I had been using a very old Galaxy SM-J320FN hand-me-down as a back-up to my iPhone. It was useful as an alternative to iOS, but 8 GB was a bit limiting, so I decided to upgrade (cheaply, and after a lot of research) by buying a second-hand J7 on eBay for about £90. Seems like a good deal when a new S10 costs about £900.

It’s a nice phone, the SM-G610F, with dual SIM, 32 GB and a micro-SD slot. However, it came with United Arab Emirates firmware, Android 6.0.1, and an older kernel and security patch level than the old J3.

So, how hard can it be, I thought, to upgrade the software? Quite hard, as it turns out. I’ve discovered that Samsung phones tend to suffer from “snowflake syndrome” – no two the same. Not only do Samsung make and sell phones for certain regions, or in some cases countries, they also make phones for specific providers. Firmware is very specific to the model and country, and whilst the software information on my UAE-firmware phone says it is an SM-G610F, the back of the phone describes itself as an SM-G610Y/DS. Impossible to find new firmware for.

Now, I’m not daft enough to risk bricking my new phone, but I am daft enough to try upgrading the J3 – just for practice, you understand. This took me down a route of installing Smart Switch, Odin and Kies. There is firmware for the J3 on SamMobile, so I started by downloading that and using Odin3 to update the phone. It got to the last step before flashing “Fail”, a situation which has been repeated with the three older versions of firmware I have tried.

The phone itself tells me “Firmware upgrade encountered an issue. Please select recovery mode in Kies and try again.” Unfortunately, Kies 3 does not recognise the phone. In emergency recovery it does not appear in the list, and attempting to use the initialisation function results in “SM-J320FN does not support initialising”.

I see my 2017 firmware has finished downloading … let me try that…

It failed. On “hidden.img” again.

BT Infinity Upgrade!

It’s only fair to say that the upgrade to the latest BT Infinity 2 package was pretty smooth. The service has been great since November 2017, when I switched from Virgin. Tempted by the recent advertising, I had a look to see what my options were. I was particularly interested in the wifi disc boosters, as parts of the house don’t have a good signal (according to the people who lie in their beds there).

I was already on an unlimited fibre package, and all the options just seemed to give the same raw upload and download speed, but I thought the prospect of the latest hub and a wifi disc was worth paying a few pounds a month for. So, having placed the order, the hub and disc (only one) arrived the next day.

Setting up the new hub was easier than I anticipated: I just plugged it in, turned it on and it worked. The only changes I made were to the wifi name and password and the admin password. Keeping the wifi name and password the same means all the existing devices are unaware of the change. You can even send your old hub back, pre-paid (and I factory reset it first).

The disc was a bit trickier… it took several goes to pair it with the hub. Maybe I had a cable issue, as it did not seem to work with one of my cables but eventually did using the cable that came with the hub. It took longer than I thought it would, with much staring at the various flashing colours to work out what they meant. At one point I suspected I should have paired it before changing the admin password, but you can’t change that back, at least not without a factory reset – the software complains if you try. That wasn’t the problem though, as it worked eventually.

In summary, it seems to have fixed the weak wifi signals, and as a bonus you can even use the ethernet port on the disc to connect truculent machines like this CentOS one, which I never could get the wifi dongle to work on.

Below – some play, not all work!

Imagine my delight to discover Capybara Games has produced “Below”, the 21st-century version of the text-based dungeon games like Rogue, Larn, Hack and NetHack that I played at Uni in the 80s. I still play from time to time using an Ubuntu VM on my laptop; my current game’s level 7 looks like this:

I liked this game so much back in the day that I wrote a dungeon generator in 6502 machine code for the BBC Micro. I got stuck at that point as I had used up all the memory. In fact, I think I have some original Rogue or Larn source code on a reel of tape in the attic. Legend has it that Ken Arnold wrote Rogue to help debug his Unix curses package. The original paper is still available here.
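Purely for nostalgia, here’s a minimal sketch in Python (no 6502 this time) of the kind of generator I mean – random rooms joined by L-shaped corridors. Every name and number here is just illustrative, not a reconstruction of the original:

```python
import random

def generate_dungeon(width=40, height=20, rooms=5, seed=1):
    """Carve a few random rooms into a wall grid and join their centres."""
    random.seed(seed)
    grid = [["#"] * width for _ in range(height)]
    centres = []
    for _ in range(rooms):
        w, h = random.randint(4, 8), random.randint(3, 5)
        x = random.randint(1, width - w - 1)
        y = random.randint(1, height - h - 1)
        for r in range(y, y + h):          # carve the room
            for c in range(x, x + w):
                grid[r][c] = "."
        centres.append((y + h // 2, x + w // 2))
    # join successive room centres with L-shaped corridors
    for (r1, c1), (r2, c2) in zip(centres, centres[1:]):
        for c in range(min(c1, c2), max(c1, c2) + 1):
            grid[r1][c] = "."
        for r in range(min(r1, r2), max(r1, r2) + 1):
            grid[r][c2] = "."
    return ["".join(row) for row in grid]

for line in generate_dungeon():
    print(line)
```

On a BBC Micro’s 32K you can see how a grid like this plus the game state would eat all the memory.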

Thought Metric

I just had a thought. I was reading chapter 2 of Nick Bostrom’s “Superintelligence” (a bit out of date now, but it feels like a classic). He has been describing various ways an AI might be created (modelling evolution, modelling the brain, self-improving programs, etc.), finishing the section by saying how AIs would not much resemble the human mind and that their cognitive abilities and goals would be alien. This follows the idea of AIs being able to improve themselves, for which there obviously needs to be a metric of what improvement means.

My thought is about thinking; a meta-thought, perhaps. What are the units of thinking? It seems like a useful unit to have when comparing cognitive abilities. What is the total number of these units that the human race has produced over the length of our history? Is the computing metric of MFLOPS a useful starting place, or just an indication of the alien nature of computers, i.e. not useful to us? I think the latter.

Let’s just say, for the sake of argument, that the unit is ideas per lifetime. Individuals will vary widely, with great minds producing tens of thousands or maybe millions of ideas (without any qualitative limitation on “idea”). There will be an average, which we can multiply by the number of people who have ever lived to generate the total “idea output” of the human race.
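For fun, the back-of-envelope sum in Python – both figures here are illustrative guesses (the ~100 billion people-ever-lived number is a commonly quoted rough estimate, and the average is entirely made up):

```python
# Back-of-envelope "idea output" of the human race.
# Both figures are illustrative, not data.
avg_ideas_per_lifetime = 10_000           # made-up average
people_ever_lived = 100_000_000_000       # rough estimate, ~100 billion

total_idea_output = avg_ideas_per_lifetime * people_ever_lived
print(f"Total human idea output: {total_idea_output:.1e}")  # prints 1.0e+15
```

A quadrillion ideas, give or take several orders of magnitude – which is rather the point: the number is less interesting than whether the unit means anything.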

So how might an AI compare? Specifically, how long would an AI take to have all the ideas that the human race has had – or would it even be capable of having the same ideas as us?

Is this a potential reason for an AI to keep us around? If an AI values ideas, and it probably would, do we collectively provide an “idea resource” that an AI can use, particularly if we are capable of producing thoughts and ideas that an AI can’t, because of its nature?

Perhaps we can take some comfort from the fact that we have amassed a huge idea bank, which an AI might not be able to reproduce via its own thinking, either because of capacity or cognitive architecture. It might *absorb* all the ideas we have produced, but, having a level of general intelligence, unless it were able to reproduce the ideas itself it would recognise that we have a unique ability.

Comforting, I think.

Chalk up a couple more ideas for the human race, even if someone else has had the idea already!

Time to Make Computers Feel Pain?

I started this post in June, but you know how it is…(it’s September now)

I come back to it now because I saw a pretty bad film yesterday on Netflix (“Tau”). Mad psychopathic genius invents AI movie. It did have one thing I liked though, the ability to cause the AI pain, which reminded me of this post.

Many people may relish the prospect, given the amount of pain people think they have received at the hands of computers. Up till now our rage has sought solace in throwing machines out of windows or smashing laptops with hammers, in fiction at least. This pain is of course entirely self-inflicted, but bad IT makes for a powerful feedback loop. The most important trait for IT: patience.

However, this is not the kind of revenge or hate led fury (like in the film) that I am thinking about.

These are ideas surfacing after reading, in quick succession, “Life 3.0” by Max Tegmark, “To Be A Machine” by Mark O’Connell and “Other Minds: The Octopus and the Evolution of Intelligent Life” by Peter Godfrey-Smith. Two themes struck me from the first two of these books: the fear of an AI explosion, and transhumanism – the desire to upload minds and consciousness to machines to “solve death”. These, I believe, are more closely related than we might imagine.

As a general point, the majority of effort has been in the direction of organic to machine. All our technology has been augmenting our physical and mental selves. I have not seen much evidence that the technology we have is being given human attributes. To be fair, I haven’t looked for any research on this. AI, you may say, is the obvious exception: aren’t we trying to make machines’ minds like ours?

Well yes, but we are forgetting one small thing: the body. If you know some Zen Buddhism, you may have come across “Shinshin Ichinyo”: mind and body as one. Again, I haven’t done any research into this, but the phrase itself neatly sums up the idea.

The mind (in the sense of the higher levels of consciousness that transhumanists seek to preserve) is, I believe, an epiphenomenon brought about by the functioning of the brain – particularly the intelligent brain. I do not think I am alone in this belief. The function of the brain is to run the body, ultimately so that the body can reproduce (be genetically successful) and life goes on.

The brain evolved to process sensory feedback from the body. As bodies have evolved more complex organs and senses, so the brain has grown to process the input. What we interpret as pain is of course a survival mechanism to move the body out of harm’s way and, as such, probably the most powerful force directing action and ultimately behaviour.
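To make the feedback-loop idea concrete, here’s a toy sketch in Python – entirely illustrative, not a claim about how any real AI works. A five-cell world has one “painful” cell and one comfortable cell; even the simplest planning method (value iteration over known dynamics) produces behaviour that steers away from the pain:

```python
# Toy sketch: "pain" as strongly negative feedback shaping behaviour.
# Five cells, 0..4; cell 4 is painful (-10), cell 0 is comfortable (+1).
N = 5
ACTIONS = (-1, 1)   # step left or right
GAMMA = 0.9         # discount factor

def reward(pos):
    return {0: 1.0, 4: -10.0}.get(pos, 0.0)

def step(pos, a):
    return min(N - 1, max(0, pos + a))   # walls clamp movement

def value_iteration(iters=100):
    v = [0.0] * N
    for _ in range(iters):
        v = [max(reward(step(s, a)) + GAMMA * v[step(s, a)] for a in ACTIONS)
             for s in range(N)]
    return v

def policy(v):
    # best action in each cell under the learned values
    return [max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * v[step(s, a)])
            for s in range(N)]

print(policy(value_iteration()))   # every cell prefers to move away from the pain
```

The point isn’t the algorithm; it’s that one big negative number in the feedback is enough to dominate the resulting behaviour – which is roughly what pain does for us.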

So if we want to influence the behaviour of artificial beings (I won’t go so far as to say life yet), then one way is to mimic our own evolution with a body and senses. If we are to hope that AI will have anything in common with us, then it must be able to sense the world the way we do, with similar senses. In a wider context, we are defined by our boundaries: our lifespan, the limits of our senses and the functionality of our bodies. (For a fascinating aside on our bodies, see Alice Roberts’s BBC4 programme on Ultra Humans.) What are the boundaries that affect an AI? We should consider carefully what abilities we give them; every new “functionality” also adds limits.

Many questions arise if we are to try and direct AI evolution along a similar path to our own, in order that we have more in common: what is a program’s “sensory envelope”? What effects can sensory input have on it? Is there an equivalent of “death”? Are there “individual” AIs, or is there, ultimately, only one? The more I think about it, the more questions arise.

And then there is the question of morals. They are another layer of behaviour, but where do they come from? A survival mechanism? A result of cooperative behaviour and societies? Cooperation only comes into play if there are numerous individuals, and there might only be one AI. Unfortunately, philosophy is not a priority at this point in time, so I rely on friends to send me snippets from “Philosophy Matters”, which is the philosophy equivalent of “Dilbert”.

Compared to the complicated questions in this rather rambling post, even the meaning of life seems like an easier question… isn’t it to reduce entropy and evolve to the extent that life can overcome the death of the Universe?

Inorganic Intelligence

AI. Artificial Intelligence. We’ve been writing about it, researching it, creating it, being scared of it and delighted with it for decades. However, I think the word “intelligence”, like so many others, drifts across a large range of meanings, and so I would like to record my view.

The first, and most common, “mistake” is to equate intelligence with knowledge. People are often referred to as intelligent when they can impress others with facts on a subject or range of subjects. This definition of intelligence makes the phrase “artificial intelligence” easy to accept, because of course computers can store vast amounts of information and retrieve it quickly, giving an appearance of intelligence by this meaning.

A second meaning is demonstrating a “thinking” skill. Playing chess, or winning at Go, are the oft-cited examples of artificial intelligence. But before we call it artificial intelligence, is it in fact intelligence? I would say not. These are mathematical algorithms or game theory.

Nevertheless, despite these underwhelming interpretations of intelligence, many people fear “AI” and believe it is just around the corner. The combination of mathematical techniques broadly categorised as neural networks, big data and machine learning, together with physical technologies offering faster and faster processing, greater storage and even quantum computing, has led people to believe that our very humanity is under threat.

It is wise to assess the risks, and who knows, we may one day give rise to artificial intelligence, but I don’t think we are in danger.

I think there are a number of aspects to intelligence. In no particular order:

  • problem solving
  • awareness
  • creativity
  • harmlessness

This may look like an odd list, and that is because it is very “human-centric”. The intelligence I am trying to describe is a human one – a set of attributes against which we can compare non-human, particularly machine (inorganic), behaviours. We can imagine “alien” behaviours – a good example is in the book “Solaris” – but their very “alienness” would make it impossible for us to comprehend whether they were intelligent or not. We are doomed to measure everything only against what we can comprehend.

So, back to the list.

Problem solving is about goal or purpose. I think this is fundamentally tied in with evolution and survival. Being able to solve problems (think) bestows an evolutionary advantage, not just in terms of figuring out new ways to “eat” but new ways to evade predators or attack prey. This was beautifully demonstrated in the recent Blue Planet series, with the reef octopus outsmarting the shark and the fish using tools to build a nest. We can see a whole spectrum of this kind of intelligence across the animal kingdom, and we often refer to dogs, dolphins and whales as clever creatures. In this dimension, they really are. [In the human case, evolution may have gone too far and made our brains so big they are a threat to the ecosystem and life itself. Kurt Vonnegut explores this idea in “Galapagos”.]

Awareness is along the lines of “emotional intelligence”: a sense of empathy, of what impact your words and actions are having on the internal mental “state” of another. I’m not saying that this type of awareness must be used for “good”, which would take us down the route of bringing a moral dimension to intelligence. Perhaps a broader definition could include this, but harmlessness, discussed below, is as far as I will go.

Creativity – something we like to think we do well. Producing something from nothing. New ideas, or new ways to communicate ideas. To overlap with awareness, creativity provides directed methods to deliberately change the internal “state” of other individuals. Humour, art, music and a search for knowledge born of curiosity are all involved.

Harmlessness is about not destroying yourself. A bit like awareness, but of your physical surroundings. This can be your immediate surroundings, or the Universe as a whole, and over any time span from minutes to aeons. We would certainly consider ourselves more intelligent if we knew what impact our actions would have in a distant future. And we would not describe anything as intelligent which acted to destroy the environment which sustained it. Given our impact on the planet, this makes humans look pretty stupid; back to “Galapagos” again.

How is “AI” in computers and software stacking up against the attributes in my list? Not very well, I would say. Of course, my list does not contain AI’s strong hand of knowledge or mathematical problem solving. Perhaps I am deliberately biasing my list in favour of humanity!

Problem solving: do AIs evolve? Do they write themselves? Are they driven to live long enough to pass their genetic information on to the next generation? Does not apply.

Awareness: I have yet to see a program demonstrate empathy for another.

Creativity: never mind the Turing test – when was the last time a computer made you laugh, deliberately, by telling a joke? Actually, maybe programs are doing that to each other all the time and we just don’t know about it. Tron?

Harmlessness: when AI becomes self-sufficient, then we can see if it is harmless. Long way to go.

Intelligence cannot be artificial; it’s just a set of attributes of a system, be it organic or inorganic. What we really mean by AI is how closely a system which has not arisen from Darwinian evolution compares with us. Artificial intelligence should really be called inorganic intelligence. Of course, you could also take the view that we are just a stage in the evolutionary process, which adapts into inorganic intelligence. “Sometimes men build robots, sometimes robots build men. What does it matter, really, whether one thinks with metal or with protoplasm?” – to give the last word to the representative of the highest possible level of development (H.P.L.D.) in “Altruizine”, from Stanislaw Lem’s “The Cyberiad”.

Robotic Process Automation

When I started this blog in 2010 it was pretty technical and centred around the uptake of virtualisation in the IT industry. Since then virtualisation has become mainstream and as my role has changed the blog has become more of a personal technology diary.

However, there is a new technology in town which promises to change the industry, and the world: robotics. Or, to be precise, Process Automation. The first hurdle the industry faces is getting away from the image of a “robot”, which is essentially a machine. It doesn’t do itself any favours by using the word, not to mention that 90% of articles contain a picture of one.

It’s just software. And what is our icon for software? Well, that’s the problem… we don’t really have one. It’s always difficult to have images for non-tangible things. Use Google to search for “software” and specify “Images”. See what you get. Nothing useful, I bet.

Nevertheless, process automation is crossing the chasm and I hope to blog more about it. I’ll put the problem of an icon for process automation on the back burner. There is an IEEE working group chaired by Lee Coulter. Not sure if the IEEE use logos on standards though.

Wireless Woes

My latest home tech purchase is a TP-Link 300Mbps Mini Wireless N USB Adapter. This was intended to replace the cable I have to run every time I want to connect my server to the internet (my home wired network doesn’t work very well, and fixing it involves lifting floorboards).

Purchased from Maplin for £10, it looked ideal as it supported Linux. However… it supports Ubuntu, not CentOS. And even for Ubuntu you need to *compile* the driver into the kernel! But that’s OK, because that’s what we sign up for with Linux.

CentOS is a problem though. Unless I am missing something, there seems to be no native driver for the RTL8192CU chipset, and none in the ELRepo repository I added for CentOS 7.

Compiling the driver for CentOS sounds like a bunch of work, and I’m surprised no one has done it already. I will have to do a bit more digging and maybe add it to the list of things to do.