The Gift That Keeps on Giving…

Giving problems, that is. Before I start, I notice it’s very nearly a year since my last blog! Doesn’t time fly! I blame Covid and starting a new job, both of which have kept me busy.

Part of the reason I am writing this now is that I stupidly locked myself out of my work account late on Friday, so I have had no work distractions over the weekend. Instead, I decided to fix a long-standing problem with my QNAP.

Yes, the gift that keeps on giving problems.

To be fair, it’s not all the QNAP’s fault. I’ve tried putting it in various rooms of the house (even empty ones) but somehow, it keeps getting turned off at the mains by “other people” plugging in vacuum cleaners or the like. This tends to wreak havoc with the 2TB ext4 filesystem and sometimes the RAID array too.

This shouldn’t happen, but it reminds me of the early days of my career in the early 90s when exactly the same thing happened at a company I worked for. They had a SCO Unix PC on a desk in an office in London. It kept going wrong and they kept complaining. After a few trips down to London to repeatedly fix it, we realised that the cleaners were unplugging it. Naturally, nobody had thought of a cabinet, a server cupboard, a UPS or even a note on the plug. IT was a bit of an inconvenience for them.

Anyway, 30 years later and it’s still happening, albeit in a domestic context. The background is that whenever I logged on to my QNAP the filesystem would always be corrupt. Sometimes it was because the box had been unplugged, but sometimes there was no reason whatsoever. To fix this I came up with a process which involved plugging in a console and keyboard, shutting down the services, unmounting the filesystem, running e2fsck and starting everything again.
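For the record, the fix-up routine looked roughly like the sketch below. The device and mount point names are assumptions from memory (they vary by model and firmware version), so check what yours are actually called before copying anything:

```
# Stop the QNAP services so nothing holds the data volume open
/etc/init.d/services.sh stop

# Unmount the data volume (device and mount point names vary by model)
umount /share/MD0_DATA

# Force a full check of the ext4 filesystem, answering yes to fixes
e2fsck -f -y /dev/md0

# Remount and bring the services back up
mount /dev/md0 /share/MD0_DATA
/etc/init.d/services.sh start
```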

After a while, however, it became apparent that running e2fsck, even when forced, didn’t fix all the problems. Straight after a filesystem fix I would still get messages like the one below:

Suspicious ext4 filesystem errors on QNAP

Now, I don’t know what the errors above mean but e2fsck never fixed them and the filesystem continued to get corrupted. Luckily I didn’t lose any data.

Anyway, this weekend I finally bit the bullet, after copying all the old content to an external drive and all the active content to OneDrive. I took the advice of various websites and decided to use the QNAP as a backup store rather than a primary one.

This involved deleting the volume completely and recreating it as a new RAID volume and filesystem. Strangely, after the original volume had been deleted, there was no option to create a RAID5, so I created a RAID10 instead. I then deleted that because I wanted more space, and the next time round the RAID5 option was available.
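For anyone curious what that amounts to under the hood, or doing the equivalent on a plain Linux box rather than through the QNAP web interface, rebuilding a RAID5 volume and filesystem boils down to something like this. The disk names and four-drive layout are purely illustrative assumptions:

```
# Create a RAID5 array from four member partitions (names are illustrative)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Put a fresh ext4 filesystem on it and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/data
```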

It’s all just a bit “random” really. I do still like the QNAP but if it continues to play up after this major surgery, I will have to write another blog.

1990s Technology

I reckon the record for the longest downtime for a computer must belong to the Difference Engine. From about the middle of the 19th century until the Science Museum finally built it in 1991, that’s about 150 years of downtime. My own record is a bit more modest, but more real since it is physically the same machine.

This is my Commodore Amiga, last powered on in the 1990s, complete with 500MB GVP hard drive and Naksha mouse. As you can see it still works (although it took a few goes to get it to boot) and is rather noisy. I can’t remember how to use it, although it has a version of emacs, TeX, some letters, a dial-up modem connection to Demon Internet and a few games I can’t remember how to play. Its main use was Sensible Soccer, so I will need to find the disks for that!

Incidentally, to close off the last post, my VPN tunnel worked!

OpenVPN

Ever since I had my first QNAP (a TS-219, which I think came out in 2010) I’ve liked QNAPs. Apart from the odd booting problem and the frequent updates it’s been perfect, with no hardware problems and ever-increasing functions and apps.

One of those apps is a VPN server and it’s always been too fiddly to get working – until now. Currently I have an SS-439 and the software now seems mature enough to work with the minimum of fuss.

I enabled the OpenVPN application, downloaded the certificate, set up the user etc. and downloaded the OpenVPN client for Windows 7 (yeah, need a new laptop veeeery soon). Having done the client-side config, I tried this on the office wifi. No luck. Wouldn’t connect at all.

The first debugging step was to try it on the inside of my network (with a brief interlude to upgrade Wireshark). Same result. That led me to look at my BT Smart Hub 2 firewall. There didn’t appear to be anything blocking it, but I decided to add a port forwarding rule just in case.
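For what it’s worth, the two checks I find most useful at this point can be run from any Linux shell (the QNAP’s included, assuming tcpdump is installed and you are root). The port is an assumption: 1194/UDP is the OpenVPN default, but the QNAP app lets you change it, and the forwarding rule has to match whatever it is actually set to. The profile name qnap.ovpn is just a placeholder for the file downloaded from the NAS:

```
# Watch for OpenVPN packets actually arriving at the NAS
tcpdump -ni any udp port 1194

# Run the client by hand with extra logging to see where the handshake stops
openvpn --config qnap.ovpn --verb 4
```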

This did *something*, because the behaviour of the client changed, and having turned on VPN logging on the QNAP, I could see failed user logins which corresponded to the message on the client.

I verified the username and password and then noticed that the username was case-sensitive – an oldie but a goodie.

Having fixed that, I connected fine – from the inside. Tomorrow the real test will be to see if I can get to it from the outside! I am hopeful! If it works, I might have a working VPN option I can use if I’m ever in China again. I spent ages trying, without success, to get a VPN working the last time I was there.

Hitting the off-switch problem

No, no, I’m not hitting the off switch, although you could be forgiven for thinking so given my frequency of blogs. This is in response to a recent lecture by Stuart Russell I attended at the 2019 DX Expo.

In this interesting talk, one of the topics was the off-switch problem, described on Wikipedia and no doubt in his latest book. This problem can be summarised as follows:

“A robot with a fixed objective has an incentive to disable its own off-switch.”

This is about who/what has control. Are humans able to turn it off if its objective does not align with ours?

The theory goes that you give the robot a positive incentive to turn itself off in situations where it determines that the outcome of its actions is uncertain.

I have two problems with this theory.

The first is that a physical robot, acting in the real world, is equated with the AI. This can be misleading. Robots and AI are two different concepts. It’s true of course that some or most robots will run AI s/w, but it’s not true that all AI needs direct control of a physical actor to achieve its goals. That can be done by manipulation of data and “human engineering”. The physical presence is almost irrelevant. The problem we are trying to solve is one of control. And there’s no off-switch for the internet.

That leads me to the second objection. Implementing an algorithm which means the robot turns itself off just moves the problem from the physical switch to the controlling algorithm. It is assumed we control the algorithm and the code. The real danger in losing control of AI is when AI s/w becomes intelligent enough to write itself. All the theory does is move the problem – arguably to a space that is more difficult to solve. At best, probabilistic programming is a short-term solution which only lasts as long as we control the code.

Internet of Thing

I joined the internet of things! Well, not me personally, but I bought a data logger which uploads its data to easylogcloud.

It’s a humidity / temperature logger which I have installed next to the piano. Pianos don’t like humidity, or to be more precise, changes in humidity, so I am keeping an eye on it to determine whether I need a Dampp-Chaser.

Some Good News

Actually, the day after I wrote the last post, I managed to fix my old bricked phone. It certainly helps if you read the instructions properly. That’s what we call a picnic error (problem in chair, not in computer).

So, I can use the tool now, but I haven’t yet worked up the courage to try newer firmware on my newer phone.

Also, I took the GCP Associate Cloud Engineer exam on Friday and passed (at least provisionally). They send you a confirmation in about a week, and I guess it would be unusual and somewhat unfair if they changed a provisional pass into a fail!

The exam itself is 50 multiple-choice questions, and if you don’t know the answer 100% then it’s relatively easy to eliminate the obviously wrong answers and make a good stab at the remaining two by carefully reading the question. In fact, careful reading of the question is the most important thing you can do – as important as revising!

My main learning resource was Linux Academy. The lectures and labs were good and their practice exam excellent preparation.

…So it’s a week later and I forgot to post this. On the plus side, Google has confirmed my exam result!

Old phone – new problem

Whilst I wait for another Android firmware version to download, I can pass the time by writing this.

I dipped my toe into the mysterious world of firmware upgrades on Samsung phones and have managed to brick (technical term) my test phone. This all came about because I had been using a very old Galaxy SM-J320FN hand-me-down as a back-up to my iPhone. It was useful as an alternative to iOS, but 8GB was a bit limiting, so I decided to upgrade (cheaply, and after a lot of research) by buying a second-hand J7 on eBay for about £90. Seems like a good deal when a new S10 costs about £900.

It’s a nice phone, the SM-G610F, with dual SIM, 32GB of storage and a micro-SD slot. However, it came with United Arab Emirates firmware, Android 6.0.1, and an older kernel and security patch level than the old J3.

So, how hard can it be, I thought, to upgrade the software? Quite hard, as it turns out. I’ve discovered that Samsung phones tend to suffer from “snowflake syndrome” – no two the same. Not only do Samsung make and sell phones for certain regions, or in some cases countries, they also make phones for specific providers. Firmware is very specific to the model and country, and whilst my software information with the UAE firmware says it is an SM-G610F, the back of the phone describes itself as an SM-G610Y/DS. Impossible to find new firmware for.

Now, I’m not daft enough to risk bricking my new phone, but I am daft enough to try upgrading the J3, just for practice you understand. This took me down a route of installing Smart Switch, Odin and Kies. There is firmware for the J3 on Sammobile, so I started by downloading that and using Odin3 to update the phone. It got to the last step before showing “Fail”, a situation which has been repeated with the three older versions of firmware I have tried.

The phone itself tells me “Firmware upgrade encountered an issue. Please select recovery mode in Kies and try again.”. Unfortunately, Kies 3 does not recognise the phone. In emergency recovery, it does not appear in the list, and attempting to use the initialisation function results in “SM-J320FN does not support initialising”.

I see my 2017 firmware has finished downloading … let me try that…

It failed. On “hidden.img” again.

BT Infinity Upgrade!

It’s only fair to say that the upgrade to the latest BT Infinity 2 package was pretty smooth. The service has been great since November 2017 when I switched from Virgin. Tempted by the recent advertising, I had a look to see what my options were. I was particularly interested in the wifi disc boosters, as parts of the house don’t have a good signal (according to the people who lie in their beds there).

I was already on an unlimited fibre package and all the options just seemed to give the same raw upload and download speed, but I thought the prospect of the latest hub and a wifi disc was worth paying a few pounds a month for. So, having placed the order, the hub and disc (only one) arrived the next day.

Setting up the new hub was easier than I anticipated: I just plugged it in, turned it on and it worked. The only changes I made were to set the wifi name and password (to the same values as before) and to change the admin password. Keeping the wifi name and password the same means all the existing devices are unaware of the change. You can even send your old hub back, pre-paid (and I factory reset it first).

The disc was a bit trickier… it took several goes to pair it with the hub. Maybe I had a cable issue, as it did not seem to work with one of my cables but eventually did using the cable that came with the hub. It took longer than I thought it would and involved much staring at what the various flashing colours mean. At one point I suspected I should have paired it before changing the admin password, but you can’t change that back, at least not without a factory reset – the s/w complains if you try. That wasn’t the problem though, as it worked eventually.

In summary, it seems to have fixed the weak wifi signals, and as a bonus you can even use the ethernet port on the disc to connect truculent machines like this CentOS one, which I never could get the wifi dongle to work on.

Below – some play, not all work!

Imagine my delight to discover Capybara Games has produced “Below”, a 21st-century version of the text-based dungeon games like Rogue, Larn, Hack and NetHack that I played at Uni in the 80s. I still play from time to time using an Ubuntu VM on my laptop; my current game’s level 7 looks like this:

I liked this game so much back in the day that I wrote a dungeon generator in 6502 machine code for the BBC Micro. I got stuck at that point as I had used up all the memory. In fact, I think I have some original Rogue or Larn source code on a reel of tape in the attic. Legend has it that Ken Arnold wrote Rogue to help debug his Unix curses package. The original paper is still available here.
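If anyone fancies a nostalgic dungeon crawl of their own, the classics are still packaged for Ubuntu – something like the below should do it, though the package names are from memory so treat them as assumptions:

```
# NetHack, plus the original Rogue from the non-free BSD games collection
sudo apt install nethack-console bsdgames-nonfree

# Then just run
nethack
rogue
```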

Thought Metric

I just had a thought. I was reading chapter 2 of Nick Bostrom’s “Superintelligence” (a bit out of date now, but it feels like a classic). He has been describing various ways an AI might be created (modelling evolution, modelling the brain, self-improving programs etc.), but finishing the section by saying that AIs would not much resemble the human mind and that their cognitive abilities and goals would be alien. This follows the idea of AIs being able to improve themselves, for which there obviously needs to be a metric for what improvement means.

My thought is about thinking; a meta-thought perhaps. What are the units of thinking? It seems like it might be a useful unit to have when comparing cognitive abilities. What is the total number of these units that the human race has produced over the length of our history? Is the computing metric of MFLOPS a useful starting place, or just an indication of the alien nature of computers, i.e. not useful to us? I think the latter.

Let’s say, for the sake of argument, that the unit is ideas per lifetime. Individuals will vary widely, with great minds producing tens of thousands or maybe millions of ideas (without any qualitative limitation on “idea”). There will be an average, which we can multiply by the number of people that have ever lived to generate a number which is the total “idea output” of the human race.
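To put some entirely made-up numbers on it: the usual estimate is that roughly 100 billion people have ever been born, so if the average were, say, 10,000 ideas per lifetime, the human race’s total idea output would be somewhere in the region of 10^15 ideas.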

So how might an AI compare? Specifically, how long would an AI take to have all the ideas that the human race has had, or would it even be capable of having the same ideas as us?

Is this a potential reason for an AI to keep us around? If an AI values ideas, and it probably would, do we collectively provide an “idea resource” that an AI can use, particularly if we are capable of producing thoughts and ideas that an AI can’t, because of its nature?

Perhaps we can take some comfort from the fact that we have amassed a huge idea bank which an AI might not be able to reproduce via its own thinking, either because of capacity or cognitive architecture. It might *absorb* all the ideas we have produced, but having a level of general intelligence, and unless it were able to reproduce the ideas itself, it would recognise that we have a unique ability.

Comforting, I think.

Chalk up a couple more ideas for the human race, even if someone else has had the idea already!