About Duncan Baillie

Technologist. Product of the ZX81 generation.

Never use an Amazon Locker :-(

Update on 2nd May… Amazon have replied to my support request, which is good even if this blog had nothing to do with it. They have apologised, refunded me and guaranteed it will never happen again – I’m not sure how they are going to do that – but I’m not going to choose a locker as a delivery mechanism for a long time.

A rather mundane post but I’m using it to get some anger out of my system.

I ordered Beta Humans and decided to try the Amazon locker near me as a delivery method. Free after all.

I received a message yesterday that the package was ready for collection and duly went to the locker with the app. The app connected to the locker and I hit collect item. Nothing happened. I pressed “help” and got the message “we’re sorry, the locker door seems to be broken”.

I came home and tried to contact Amazon. Impossible to do. There is no help available for faulty lockers. The chatbot has no options for this and just sends you round in circles about returning an item.

The call me back option just does nothing.

I emailed cis@amazon.co.uk but no reply.

Fast forward to today and I get a message thanking me for picking up the parcel. Clearly someone else has hacked/stolen the item.

Just now I convinced the robot to send me a refund, but this is a piss-poor service from Amazon because a) the lockers are clearly not reliable, b) there is no way to get support and c) they are not secure.

I will not be buying anything from Amazon until this issue is resolved.

Kurt Vonnegut’s view of AI

I re-read “The Sirens of Titan” by Kurt Vonnegut, a book I have read a few times but I always find new angles in it. Kurt Vonnegut is a writer who likes to ask the Big Questions.

In this case I thought the following passage could be viewed as a pretty accurate comment on AI:

“Once upon a time on Tralfamadore there were creatures who weren’t anything like machines. They weren’t dependable. They weren’t efficient. They weren’t predictable. They weren’t durable. And these poor creatures were obsessed by the idea that everything that existed had to have a purpose, and that some purposes were higher than others.
These creatures spent most of their time trying to find out what their purpose was. And every time they found out what seemed to be a purpose of themselves, the purpose seemed so low that the creatures were filled with disgust and shame.
And, rather than serve such a low purpose, the creatures would make a machine to serve it. This left the creatures free to serve higher purposes. But whenever they found a higher purpose, the purpose still wasn’t high enough.
So machines were made to serve higher purposes too.
And the machines did everything so expertly that they were finally given the job of finding out what the highest purpose of the creatures could be.
The machines reported in all honesty that the creatures couldn’t really be said to have any purpose at all.
The creatures thereupon began slaying each other, because they hated purposeless things above all else.
And they discovered that they weren’t even very good at slaying. So they turned that job over to the machines, too. And the machines finished up the job in less time than it takes to say “Tralfamadore”.”

I guess that pretty much sums up this, and other books by the same author. It is typically a very dark opinion wrapped up in a light-hearted way; in this case a chilling summary of many opinions on the future of AI. As such it shares a common theme with Rossum’s Universal Robots by Karel Čapek, which arguably started the genre.

QNAP The Saga Continues…

So. After the remedial work below, I fully expected the problem to be fixed. But no! On logging in the next day I see that the QNAP lost power at 3:09 in the morning. It reset, re-synchronised the RAID array and reported the filesystem corrupt. There was certainly no-one unplugging it at that time of the morning unless we have a secret midnight cleaner.

A quick piece of online searching reveals this to be a common problem with some QNAPs and some versions of firmware. Mine is an SS-439 on 4.2.6. Whatever models and firmware the reports mention, it looks like the PSU just gives up under certain conditions. This is bad because it means faulty hardware, and it’s hard to do anything about that. But, more research needed.

P.S. I switched the PSU for the one from my old QNAP a couple of days after the above.

The Gift That Keeps on Giving…

Giving problems that is. Before I start, I notice it’s very nearly a year since my last blog! Doesn’t time fly! I blame Covid and starting a new job which has kept me busy.

Part of the reason I am writing this now is that I stupidly locked myself out of my work account late on Friday so have had no work distractions over the weekend. Instead I decided to fix a long-standing problem with my QNAP.

Yes, the gift that keeps on giving problems.

To be fair, it’s not all the QNAP’s fault. I’ve tried putting it in various rooms of the house (even empty ones) but somehow, it keeps getting turned off at the mains by “other people” plugging in vacuum cleaners or the like. This tends to wreak havoc with the 2TB ext4 filesystem and sometimes the RAID array too.

This shouldn’t happen but it reminds me of the early days of my career in the early 90s when exactly the same thing happened at a company I worked for. They had a SCO Unix PC on a desk in an office in London. It kept going wrong and they kept complaining. After a few trips down to London to repeatedly fix it, we realised that the cleaners were unplugging it. Naturally they hadn’t thought of a cabinet, server cupboard or even a UPS or a note on the plug. IT was a bit of an inconvenience for them.

Anyway, 30 years later and it’s still happening albeit in a domestic context. The background is that whenever I would log on to my QNAP the filesystem would always be corrupt. Sometimes because it had been unplugged but sometimes for no reason whatsoever. To fix this I came up with a process which involved plugging in a console and keyboard, shutting down the services, un-mounting the filesystem, running e2fsck and starting everything again.
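For reference, that process looked roughly like the sketch below. The device and mount point (/dev/md0, /share/MD0_DATA) are the usual QNAP defaults rather than something I have checked against every model, so verify them with `mount` before trying this yourself.

```shell
# Sketch of the manual repair routine, run as admin on the QNAP console.
# /dev/md0 and /share/MD0_DATA are typical QNAP names -- check with `mount`.
repair_volume() {
    /etc/init.d/services.sh stop      # stop the services holding the volume open
    umount /dev/md0                   # unmount the data volume
    e2fsck -f -v -C 0 /dev/md0        # -f: check even if marked clean, -C 0: progress bar
    mount -t ext4 /dev/md0 /share/MD0_DATA
    /etc/init.d/services.sh start     # bring the services back up
}
```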

After a while however, it became apparent that running e2fsck, even when forced, didn’t fix all the problems. Straight after a filesystem fix I still got messages like the one below:

Suspicious ext4 filesystem errors on QNAP

Now, I don’t know what the errors above mean but e2fsck never fixed them and the filesystem continued to get corrupted. Luckily I didn’t lose any data.

Anyway, this weekend I finally bit the bullet, after copying all the old content to an external drive and all the active content to OneDrive. I took the advice of various websites and decided to use the QNAP as a backup store rather than a primary one.

This involved deleting the volume completely and re-generating it as a new RAID volume and filesystem. Strangely, after the original volume had been deleted, there was no option to create a RAID5, so I created a RAID10 instead. I deleted that because I wanted more space, and the next time round the RAID5 option was available.
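The “more space” point is simple arithmetic: with four equal disks, RAID10 mirrors pairs while RAID5 only sacrifices one disk’s worth of capacity to parity. Disk sizes here are illustrative, not my actual drives:

```shell
# Usable capacity with four equal 1 TB disks (illustrative sizes)
disks=4
size_tb=1
raid10_tb=$(( disks * size_tb / 2 ))    # RAID10: mirrored pairs, half the raw capacity
raid5_tb=$(( (disks - 1) * size_tb ))   # RAID5: one disk's worth of parity
echo "RAID10: ${raid10_tb} TB usable, RAID5: ${raid5_tb} TB usable"
# prints: RAID10: 2 TB usable, RAID5: 3 TB usable
```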

It’s all just a bit “random” really. I do still like the QNAP but if it continues to play up after this major surgery, I will have to write another blog.

1990’s Technology

I reckon the record for the longest downtime for a computer must belong to the Difference Engine. From about the middle of the 19th century until the Science Museum re-built it, that’s about 150 years of downtime. My own record is a bit more modest, but more real since it is physically the same machine.

This is my Commodore Amiga, last powered on in the 1990s, complete with 500MB GVP hard drive and Naksha mouse. As you can see it still works (although it took a few goes to get it to boot) and is rather noisy. I can’t remember how to use it, although it has a version of emacs, TeX, some letters, a dial-up modem connection to Demon Internet and a few games I can’t remember how to play. Its main use was Sensible Soccer so I will need to find the disks for that!

Incidentally, to close off the last post, my VPN tunnel worked!

OpenVPN

Ever since I had my first QNAP (a TS-219 which I think came out in 2010) I’ve liked QNAPs. Apart from the odd booting problem and the frequent updates it’s been perfect, having had no hardware problems and ever increasing functions and apps.

One of those apps is a VPN server and it’s always been too fiddly to get working – until now. Currently I have an SS-439 and the software now seems mature enough to work with the minimum of fuss.

I enabled the OpenVPN application, downloaded the certificate, set up the user etc. and downloaded the OpenVPN client for Windows 7 (yeah, need a new laptop veeeery soon). Having done the client-side config, I tried this on the office Wifi. No luck. Wouldn’t connect at all.
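For the record, the client-side config is just a small text file the Windows client imports; mine looked roughly like the sketch below. The remote hostname and port are placeholders for your own public address and whatever port the QNAP’s OpenVPN app is configured to use.

```
client
dev tun
proto udp
# placeholder: your public IP or DDNS name, and the port the QNAP app uses
remote my-home.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
# ca.crt is the certificate downloaded from the QNAP's OpenVPN page
ca ca.crt
# prompt for the username and password at connect time
auth-user-pass
verb 3
```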

First debugging step was to try it on the inside of my network (with a brief interlude to upgrade Wireshark). Same result. That took me to looking at my BT Smarthub 2 firewall. Looks like no blockers there but I decided to add a port forwarding rule just in case.

This did *something* because the behaviour of the client changed, and having turned on logging for VPN on the QNAP, I could see failed user logins which corresponded to the message on the client.

I verified the username and password and then noticed that the username was case sensitive – an oldie but a goodie.

Having fixed that I connected fine – from the inside. Tomorrow the real test will be to see if I can get to it from the outside! I am hopeful! If it works, I might have a working VPN option I can use if I’m ever in China again. I spent ages trying to get a VPN working the last time I was there without success.

Hitting the off-switch problem

No, no, I’m not hitting the off switch, although you could be forgiven for thinking so given my frequency of blogs. This is in response to a recent lecture by Stuart Russell I attended at the 2019 DX Expo.

In this interesting talk, one of the topics was the off-switch problem, described on Wikipedia and no doubt in his latest book. This problem can be summarised as follows:

“A robot with a fixed objective has an incentive to disable its own off-switch.”

This is about who/what has control. Are humans able to turn it off if the objective does not align with ours?

The theory goes that you give the robot a positive incentive to turn itself off in situations where it determines the outcome of its actions is uncertain.

I have two problems with this theory.

The first is that a physical robot acting in the real world is equated with the AI. This can be misleading. Robots and AI are two different concepts. It’s true of course that some or most robots will run AI s/w, but it’s not true that all AI needs direct control of a physical actor to achieve its goals. That can be done by manipulation of data and “human engineering”. The physical presence is almost irrelevant. The problem we are trying to solve is one of control. And there’s no off switch for the internet.

That leads me to the second objection. Implementing an algorithm which means the robot turns itself off just moves the problem from the physical switch to the controlling algorithm. It is assumed we control the algorithm and the code. The real danger in losing control of AI is when AI s/w becomes intelligent enough to write itself. All the theory does is move the problem – arguably to a more difficult space to solve. At best, probabilistic programming is a short-term solution which only lasts as long as we control the code.

Internet of Thing

I joined the internet of things! Well, not me personally but I bought a data logger which uploads its data to easylogcloud.

It’s a humidity/temperature logger which I have installed next to the piano. Pianos don’t like humidity, or to be more precise, changes in humidity, so I am keeping an eye on it to determine if I need a Dampp-Chaser.

Some Good News

Actually, the day after I wrote the last post, I managed to fix my old bricked phone. It certainly helps if you read the instructions properly. That’s what we call a picnic error (problem in chair, not in computer).

So, I can use the tool now but I haven’t yet found the courage to try newer firmware on my newer phone.

Also, I took the GCP Associate Cloud Engineer Exam on Friday and passed (at least provisionally). They send you a confirmation in about a week and I guess it would be unusual and somewhat unfair if they changed a provisional pass into a fail!

The exam itself is 50 multiple choice questions and if you don’t know the answer 100% then it’s relatively easy to eliminate obviously wrong answers and make a good stab at the remaining two by carefully reading the question. In fact, careful reading of the question is the most important thing you can do – as important as revising!

My main learning resource was Linux Academy. The lectures and labs were good and their practice exam excellent preparation.

…So it’s a week later and I forgot to post this. On the plus side, I can confirm that Google confirmed my exam result!

Old phone – new problem

Whilst I wait for another Android firmware version to download, I can pass the time by writing this.

I dipped my toe into the mysterious world of firmware upgrades on Samsung phones and have managed to brick (technical term) my test phone. This all came about because I had been using a very old Galaxy SM-J320FN hand-me-down as a back-up to my iPhone. It was useful as an alternative to iOS but 8G was a bit limiting so I decided to upgrade (cheaply and after a lot of research) by buying a second hand J7 on eBay for about £90. Seems like a good deal when a new S10 costs about £900.

It’s a nice phone, the SM-G610F, with Dual SIM, 32G and a micro-SD slot. However, it came with United Arab Emirates firmware, Android 6.0.1 and an older kernel and security patch level than the old J3.

So, how hard can it be, I thought, to upgrade the software? Quite hard as it turns out. I’ve discovered that Samsung phones tend to suffer from “snowflake syndrome” – no two the same. Not only do Samsung make and sell phones for certain regions, or in some cases countries, they also make phones for specific providers. Firmware is very specific to the make and country and whilst my software information with UAE firmware says it is a SM-G610F, the back of the phone describes itself as a SM-G610Y/DS. Impossible to find new firmware for.

Now, I’m not daft enough to risk bricking my new phone but I am daft enough to try upgrading the J3, just for practice you understand. This took me down a route of installing Smart Switch, Odin and Kies. There is firmware for the J3 on Sammobile so I started by downloading that and using Odin3 to update the phone. It got to the last step before flashing: “Fail”, a situation which has been repeated with the three older versions of firmware I have tried.

The phone itself tells me “Firmware upgrade encountered an issue. Please select recovery mode in Kies and try again.”. Unfortunately, Kies 3 does not recognise the phone. In emergency recovery, it does not appear in the list and attempting to use the initialisation function results in “SM-J320FN does not support initialising”.

I see my 2017 firmware has finished downloading … let me try that…

It failed. On “hidden.img” again.