Getting Technical

So I had a thought that I should make better use of what I have already, which is to say an iMac running Leopard. It has a 2.4 GHz Intel Core 2 Duo and 2 GB of RAM, so I thought it might be an interesting academic exercise to try to build Xen on it. Not that we're allowed to run Darwin on top of a hypervisor, but I should be able to learn something from building Xen at least!

First of all, download and unpack Xen 4.0.0. OK, that's easy. Look at the README file. Hmm, a list of prerequisites, of course. I downloaded the latest GNU compiler but quickly realised I was barking up the wrong tree and dug out my DVDs so I could install Xcode. (I've been registered as an Apple developer for a couple of years but never got around to even installing the toolset.) Anyway, that was easy too, and a quick perusal of the developer website left me impressed.

So that has given me GCC and Make, but the next item on the list is binutils. A quick Google of "binutils darwin" led me to the DarwinPorts website, so I downloaded and installed DarwinPorts. Following the clues (not quite the exact instructions) led to an installation of binutils. Watching "port" work is also impressive. It takes all the frustration/fun out of building software: it goes and gets everything it needs, then builds and installs it (after you tell it to update itself).

All went swimmingly until it fell over trying to build "gettext". I tried it a couple of times to no avail and ended up running the failing command manually, which produced masses of output and lots of warnings but did "something". (You have to have a bit of faith sometimes.) A port install of zlib worked fine, and a final invocation of "port install binutils" completed OK, although I was warned that "Having binutils installed will cause some other ports to fail to build. Consider uninstalling binutils." Cute. I just installed it.
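
For the record, the whole DarwinPorts dance boils down to very little. Here is a rough sketch, in Python for want of a proper shell script, of the sequence I ended up running by hand. The port names are just the ones mentioned above, not a verified dependency list, and it assumes DarwinPorts itself is already installed:

```python
import subprocess

# Rough sketch of the DarwinPorts workflow described above: "port selfupdate"
# refreshes port itself and its ports tree, then "port install <name>" fetches,
# builds and installs a port plus everything it depends on. The package list is
# just the bits mentioned here, not a verified dependency set.
PORTS = ["zlib", "binutils"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop if any port command fails

def install_ports(ports):
    run(["sudo", "port", "selfupdate"])
    for name in ports:
        run(["sudo", "port", "install", name])

if __name__ == "__main__":
    install_ports(PORTS)
```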

Back to Xen's dependencies. Python, tick. OpenSSL, tick. X11, tick. curses… looks like another port job. Too late to look at that now, and all the other bits will wait until the next free slot.
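
For next time, here's a quick sketch of how I might tick the rest of the list off automatically. It reflects my reading of the README's prerequisites (gcc, make, binutils, Python, OpenSSL, X11, curses); the header paths are guesses for this machine rather than anything from the Xen docs, and it doesn't check versions:

```python
import os
import shutil

# Quick-and-dirty check of the Xen build prerequisites as I read them from the
# README. It only looks for the obvious binaries on the PATH and a couple of
# headers; the header locations are guesses for this machine, not from the docs.
TOOLS = ["gcc", "make", "ld", "as", "python", "openssl"]

HEADERS = {
    "x11":    ["/usr/X11/include/X11/Xlib.h", "/usr/X11R6/include/X11/Xlib.h"],
    "curses": ["/usr/include/curses.h", "/opt/local/include/curses.h"],
    "zlib":   ["/usr/include/zlib.h", "/opt/local/include/zlib.h"],
    "ssl":    ["/usr/include/openssl/ssl.h", "/opt/local/include/openssl/ssl.h"],
}

def main():
    for tool in TOOLS:
        path = shutil.which(tool)
        print("%-8s %s" % (tool, path or "MISSING"))
    for name, candidates in HEADERS.items():
        found = next((p for p in candidates if os.path.exists(p)), None)
        print("%-8s %s" % (name, found or "MISSING"))

if __name__ == "__main__":
    main()
```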

Good Tech/Bad Tech

Following up on my interest in "zero client" devices, I found Pano Logic referred to in Brian Madden's blog. They look like great devices!

Following on from a presentation from Leostream today, I'm beginning to realise just how many virtualisation companies there are out there: hundreds if not thousands. One slide showed a whole swathe of companies I'd never heard of. Much to learn!

On a non-virtual note, let me relate my experience with a couple of pieces of home tech. First of all, a new Netgear RangeMax Wireless-N router (model WNR834Bv2). For the past year I have been using a Wireless-G router which came "free" from Virgin when I subscribed to their cable service. Unfortunately, that device has customised Virgin firmware, and despite following instructions I found on the internet for logging in to the device (it's a small Linux-y thing) I could not get up-to-date firmware installed. Only a problem because it did not work with any of the family's Nintendo devices.

I was happy to get the new device up and running in a matter of minutes and it works very well with the Nintendos and other machines. Good Tech!

My wife's newish iPhone 3GS packed up yesterday. Bad Tech. Despite being plugged in overnight and during the day on different chargers, the battery would not hold more than 4%. Most of the day it wouldn't even turn on. Luckily I managed to coax enough life into it to back it up and take it to the Apple Store, who agreed it was kippered and replaced it with a new one. On the plus side, despite the hassle, restoring all the settings onto the new device worked smoothly.

Personally, I'm not a fancy-phone person. I spend enough time in front of the computer/internet already. Provided I have something people can call me on and send text messages with, that's all I need.

Desktop Virtualisation Forum (cont.)

So, on to Roy Illsley of Ovum and "Market Trends: Is 2010 the tipping point for Desktop Virtualisation?". This technology is new and small: the global market for desktops is 600 million units, and virtualisation has about 0.5% of it. But it's not just about the technology, it's about the process too, and this is where there needs to be a change in mindset. There is a convergence in thinking between VMware and Citrix, and a growing "ecosystem", as evidenced by this forum and others.

Last, but not least, Simon Bullers, CEO of RedPixie, presented "Implementing best practice for desktop virtualisation". His bullet points on how to deliver a successful desktop virtualisation project included:

  • Create a culture of teamwork. Think about whether you need a dedicated team or whether to run it as BAU. Get the Data Centre involved.
  • Create a culture of end users. Build positive PR within the organisation and an appetite for change.
  • Technology – Client. There are various types of client to consider, there are various types of application to consider. Build a demo lab.
  • Technology – Storage. Measure and optimise throughout the course of the project.
  • Technology – Platform. Server hardware: blades vs. racks. Type 1 or type 2 hypervisor? A blend?
  • Size for the peak users.
  • Process: spend time information gathering and planning. Decide on scheduling.
  • Agree the appetite for risk.
  • Process: difficult decisions, don’t get involved in a blame game.
  • Financials – it's a minefield! Does it need to show an ROI? User chargebacks?

A quick executive summary: Windows 7 is the best reason to adopt so far but don’t play the funny numbers game (a reference I guess to the potential cost savings).

Many points to consider there, and the forum as a whole was very worthwhile. After this talk there was a panel Q&A session before lunch. I regret not being able to attend the afternoon breakout sessions, where no doubt plenty of discussion took place and many thought-provoking ideas developed from the morning themes.

To re-iterate what I said yesterday, I think to be truly successful, a desktop virtualisation project has to deliver 100%.  The overheads of maintaining two or more desktop platforms are going to kill any efficiencies quickly. That is why I would seek to convince the sceptics first and not start with the users who were already fans.

I was interested in the “Zero Client” device. This for me is the “ultimate” solution, or at least the most evolved of all the solutions currently in play. In some ways it is a direct descendant of a dumb terminal of the type Wyse manufactured 30 years ago. A serial line delivering ASCII characters has been replaced by the LAN or WAN delivering rich media over optimised protocols. I would put my money on these types of device to be the most successful as the field develops.

As to the next generation of device after that, I think it will naturally be led by advances in user interfaces in the field of human computer interaction. That field seems to have been quiet in recent years after the revolution in mice, graphics and workstations. Maybe that’s my cue to go and watch some more Sci-Fi movies…there must be another HCI revolution due soon?

Desktop Virtualisation Forum

This morning I attended the Desktop Virtualisation Forum in London, billed as "how to reduce costs, increase flexibility, and improve security through virtualisation" and organised by Outsourced Events and the BroadGroup. Platinum sponsors were Citrix and WYSE, with AppSense, ThinPrint and Pillar Data Systems also laying out their stalls.

There was a good attendance for a Monday morning. The morning consisted of four plenary sessions; the afternoon was divided into two breakouts, but unfortunately I was unable to attend those.

Marion Howard Healy, as chair, introduced the speakers and started us off with a couple of statistics: there is 24% penetration of desktop virtualisation in the market (from which I take it that 24% of companies have some desktop virtualisation), and 59% of companies say that lack of experience is a barrier to adoption.

Patrick Irwin of Citrix gave the keynote, "Making sense of desktop virtualisation". He started with the Albert Einstein quote "Insanity is doing the same thing over and over again and expecting different results" as a segue into the traditional way of deploying desktops, an eight-step loop. His definition of the desktop as three components, OS + apps + profile, which can be decoupled using virtualisation and delivered to the user as a service, is a good model, but for me it is missing one vital component: data.
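
To make the decomposition concrete, here's a toy sketch of a desktop composed from separately managed layers, with data bolted on as the fourth component I think the model needs. It is purely my own illustration, not anything Citrix presented, and the image name and share URLs are made up:

```python
from dataclasses import dataclass
from typing import List

# Toy model only: a desktop "composed" from independently managed layers,
# assembled for the user at login.
@dataclass
class Desktop:
    os_image: str    # the gold OS image, patched centrally
    apps: List[str]  # applications streamed or published on demand
    profile: str     # the user's personality/settings, roamed separately
    data: str        # the missing piece: where the user's files actually live

def compose(user: str) -> Desktop:
    """Assemble a desktop for a user from the four decoupled components."""
    return Desktop(
        os_image="win7-gold",                    # hypothetical image name
        apps=["office", "browser", "erp-client"],
        profile=f"smb://profiles/{user}",        # hypothetical profile store
        data=f"smb://homedirs/{user}",           # hypothetical home directory
    )

print(compose("alice"))
```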

He explained that desktop virtualisation is not the same as VDI; rather, VDI is one of several virtual desktop delivery options, which form a range of solutions from server-side compute (e.g. hosted shared desktops) to client-side compute (e.g. a local VM-based desktop built on a type 1 hypervisor).

The benefits of desktop virtualisation are agility, productivity and cost, although, unlike server virtualisation, it is initially cost-neutral.

This excellent introduction was followed by David Angwin of WYSE with "A solution for all? The promise and reality of desktop virtualisation". He reminded us that we still have to tackle the challenges of managing a desktop, and introduced the idea of an ideal client and how the promise of such a device differs from the reality. The promise is a move towards the cloud, delivering cost benefits in opex, capex and energy, as well as business benefits in security, compliance and manageability.

The reality is that (according to Gartner) the TCO of a PC is $117/month, versus $135/month for a PC + VDI solution (all the expensive back-end infrastructure, I guess). Savings start with a Thin Client (TC) and VDI at $72/month, and extend with TC + WTS ($42/month) and TC + XA ($38/month). ('Fraid I didn't catch what the last two acronyms stood for.)
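
Just running the arithmetic on those quoted figures against the plain PC baseline (nothing here beyond the numbers above):

```python
# The per-seat monthly TCO figures quoted in the talk (USD/month).
tco = {
    "PC":       117,
    "PC + VDI": 135,
    "TC + VDI":  72,
    "TC + WTS":  42,
    "TC + XA":   38,
}

baseline = tco["PC"]
for option, cost in tco.items():
    delta = cost - baseline
    if delta == 0:
        note = "baseline"
    elif delta < 0:
        note = f"saves ${-delta}/month vs the PC"
    else:
        note = f"costs ${delta}/month more than the PC"
    print(f"{option:9s} ${cost:4d}/month   {note}")
```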

He introduced the term "Zero Client" (coined by WYSE): a device which does one thing, connect to some virtual infrastructure. It has no O/S and no disk (so it is inherently secure). A thin client, by contrast, has a local embedded O/S.

One example I found particularly interesting was that of Hilton Hotels, who have used zero clients (I believe) along with specialist software to do away with traditional call centres and tap a rich vein of home workers from a totally different demographic, giving them a "virtual call centre".

His take-aways were to break the relationship with the tin, look at the server as well as the client, fund with refresh, and identify IT pioneers.

Again, for me, data was not sufficiently addressed, here or in the remaining talks.

The final point, identifying IT pioneers, was echoed by other speakers, particularly Simon Bullers from RedPixie. However, I think you could fall into a trap here. To be ultimately successful, a virtualisation project has to deliver 100%. Every single desktop in your organisation which does not follow your virtual design pattern takes away from the benefit. If you only manage to get 80% of your desktops virtualised, then the remaining 20% are going to weigh you down sufficiently to negate many of the benefits. The people you need to start with are not the early IT adopters but the IT sceptics. You need to convince your "problem users" first, not last. Address all their niggles, or at least offer them tangible benefits to convince them to adopt, and you have cleared your biggest hurdle. In that case you have a much better chance of reaching 100%.
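
To put a toy model behind that argument (the numbers are entirely made up; only the shape of the curve matters): assume each desktop platform you keep alive carries a fixed running cost on top of a per-desktop cost, with the virtual per-desktop cost lower. Stalling at 80% means paying the fixed cost of the legacy platform forever, for the benefit of the hold-outs:

```python
# Entirely made-up numbers -- only the shape of the curve matters.
FIXED_COST_PER_PLATFORM = 50_000  # yearly cost of running a desktop platform at all
PHYSICAL_PER_DESKTOP = 900        # yearly cost per traditional desktop
VIRTUAL_PER_DESKTOP = 600         # yearly cost per virtual desktop
TOTAL_DESKTOPS = 1_000

def yearly_cost(virtualised_fraction):
    """Total yearly cost when only this fraction of desktops is virtualised."""
    virt = int(TOTAL_DESKTOPS * virtualised_fraction)
    phys = TOTAL_DESKTOPS - virt
    cost = 0
    if virt:  # keep the virtual platform running
        cost += FIXED_COST_PER_PLATFORM + virt * VIRTUAL_PER_DESKTOP
    if phys:  # ...and the legacy platform, for every hold-out
        cost += FIXED_COST_PER_PLATFORM + phys * PHYSICAL_PER_DESKTOP
    return cost

for fraction in (0.0, 0.5, 0.8, 0.95, 1.0):
    print(f"{fraction:4.0%} virtualised: {yearly_cost(fraction):>9,} per year")
```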

Tomorrow, I hope I will cover the remaining two talks: Roy Illsley from Ovum “Market Trends: Is 2010 the tipping point for Desktop Virtualisation?” and Simon Bullers from RedPixie on “Implementing best practice for desktop virtualisation”.

Nothing to report

Wow, is that the date? This week has flown by. Nothing to report really; I read a few more papers, one from TechTarget/Bitpipe/Intel doing a performance comparison with an earlier test of streamed desktops versus virtual hosted machines.

Went to see the Pixies at their new office. Check out http://www.redpixie.co.uk.

Looking forward to the Desktop Virtualisation Forum in London on Monday http://www.desktopvirtualisationforum.com/DTVRTagenda.php.

Will post a report afterwards. Promise.

Vmotion meets JVMs

Last night I finished reading "Land of Dreams" by James P. Blaylock (book review blog in an alternate universe) and was just wondering whether to start another book, but instead started thinking about vmotion.

Wouldn't it be great, I thought, if VMs or applications could be dynamically moved between different types of processor. It's already pretty awesome that we can move VMs between the same type of processor, even if VMware's current vmotion technology is pretty fussy about processor types. But to move between different processors, different architectures even, that would be a land of dreams.

Let's not forget that a few years ago the prospect of moving a running OS from one machine to another was pretty far out. And virtualisation is not a new concept in computing; the current virtualisation revolution is only an evolution of how we arrange layers of abstraction… that big onion that starts with transistors and ends up with Java or something.

So what would it take to move a running program or OS from one architecture to another? Well, it would probably take another layer of abstraction, say a virtual machine target with an interpreted byte-code language, something like a Java Virtual Machine. I believe the catchphrase of Java was (or is) "write once, run anywhere". Well, if you had a program running on a JVM on an x86 architecture, and the same JVM could run on a SPARC architecture, what's to stop a piece of vmotion technology moving that application from one to the other?
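
As a toy illustration of why byte code makes the idea thinkable at all -- and it is only that, a few lines of Python rather than a JVM, dodging every hard problem like threads, JIT-compiled code and open file handles -- here is a microscopic stack machine whose entire state is plain data. Freeze it on one box, thaw it on another with a completely different CPU underneath, and it carries on:

```python
import json

# A microscopic "virtual machine": a byte-code-ish program, a program counter
# and a stack, all plain data. Because none of it is native machine state, the
# snapshot is architecture-neutral -- the same JSON could be resumed by an
# interpreter on x86, SPARC or anything else.
PROGRAM = [
    ("push", 2), ("push", 3), ("add", None),  # 2 + 3
    ("push", 10), ("mul", None),              # ... * 10
    ("print", None),
]

def step(state):
    op, arg = PROGRAM[state["pc"]]
    if op == "push":
        state["stack"].append(arg)
    elif op == "add":
        b, a = state["stack"].pop(), state["stack"].pop()
        state["stack"].append(a + b)
    elif op == "mul":
        b, a = state["stack"].pop(), state["stack"].pop()
        state["stack"].append(a * b)
    elif op == "print":
        print("result:", state["stack"].pop())
    state["pc"] += 1

state = {"pc": 0, "stack": []}
for _ in range(3):                   # run part of the program on "machine A"...
    step(state)

snapshot = json.dumps(state)         # ...freeze the state ("vmotion" it over the wire)...
resumed = json.loads(snapshot)       # ...thaw it on "machine B"...

while resumed["pc"] < len(PROGRAM):  # ...and carry on where it left off.
    step(resumed)
```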

Many things, probably, and I don’t know enough about JVMs (or vmotion) to even know what they would be. The question being, would they be show-stoppers which could not be overcome?

Anyway, it seems like an interesting idea, which is partly the point of this blog, even if a) no-one reads it and b) it makes me look silly. At the other extreme, of course, if it is doable, someone is probably trying it or has done it already. And let me remember (showing my age again) the early 80s, when I was using the UCSD p-System, an interpreted, Pascal-based virtual machine designed to run on many different microprocessors (booting from floppy disk): http://en.wikipedia.org/wiki/UCSD_Pascal. Java, new?

Found a new book to read “Shogun”. I wonder what ideas might come out of that?

Almost there

As I home in on a home machine configuration (I won't go so far as to call it a lab) I can offer this very useful link on Nehalem memory configuration.

http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations.

There's soooo much choice on the HP website it makes it hard to choose. Even more choice of processor on the Intel website. Things were much simpler when you had to choose between a BBC Micro Model A and a BBC Micro Model B.

Anyway, it's looking like an HP ML330 G6 with twin quad-core E5504 processors and three additional 2 GB RDIMMs. I'm thinking I can use the 2 GB UDIMM it comes with to keep the other processor happy.
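
And while I'm at it, a toy sanity check on how the DIMMs would spread across the two sockets. The assumptions are mine and worth checking against the Dell article above and the HP QuickSpecs: three memory channels per Xeon 5500-series socket, with bandwidth best when the channels are populated evenly. Also, I believe RDIMMs and UDIMMs generally can't be mixed in the same server, so that factory UDIMM may have to come out rather than keeping the second processor happy -- another one to check:

```python
# Toy sanity check of how the DIMMs spread across a two-socket Nehalem box.
# Assumptions are mine: three memory channels per Xeon 5500-series socket,
# and bandwidth is best when channels are populated evenly.
SOCKETS = 2
CHANNELS_PER_SOCKET = 3

def layout(dimm_sizes_gb):
    """Round-robin a list of DIMM sizes across sockets, then channels."""
    slots = [[[] for _ in range(CHANNELS_PER_SOCKET)] for _ in range(SOCKETS)]
    for i, size in enumerate(dimm_sizes_gb):
        socket = i % SOCKETS
        channel = (i // SOCKETS) % CHANNELS_PER_SOCKET
        slots[socket][channel].append(size)
    return slots

for socket, channels in enumerate(layout([2, 2, 2, 2])):  # four 2 GB DIMMs
    per_channel = [sum(c) for c in channels]
    print(f"socket {socket}: {per_channel} GB per channel, total {sum(per_channel)} GB")
```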

That’s probably way too much power for a typical home machine but in a few years time I bet it will look old and slow.

I’ll post the cost of it all when I get management sign off, if you know what I mean.

Academia

The years I spent in academia, learning my craft so to speak, are full of good memories. It is disappointing that these days so much less research seems to be done in an academic environment. Looking for current research on virtualisation turns up papers from EMC, IBM, HP, Sun, etc. Nothing wrong with that, as very good research is no doubt done and researchers are (I suspect) better paid, funded and resourced than they would be in today's typical academic institution.

Companies however, are fundamentally different institutions from Universities and are ultimately driven by the lure of profit. I have not been in touch with academia for many years but I would like to think that the ivory towers still exist to provide an environment where researchers can follow any idea for no other reason than it is interesting to them. Call me old fashioned but I think there is great value in that and I don’t see much evidence of that taking place these days.

Perhaps it does and I am just out of touch, having spent pretty much my whole career working for large companies.

IT is lucky. "Ideas" are relatively cheap and easy to try out, either empirically, by writing some code or building some hardware, or by mathematical modelling. Cheaper than building a multi-million-dollar particle smasher under the French/Swiss border, anyway. On the flip side, being cheap and easy encourages people to work more in isolation, the early development of Linux by Linus Torvalds being an example.

Sure, there are communities and user groups but these are typically geared towards specific products and the problems and future development of them.

It's not a question of quantity. IT by its very nature is constantly innovating, evolving and producing new ideas. It is the quality of the ideas I am worried about. I suspect that if the state of academic research were improved, IT would be rewarded with some truly remarkable things.

Done some reading

Over a hectic bank holiday weekend juggling DIY, childminding and my wife's business duties (that's another story), I managed to finish reading both ends of the spectrum: Virtualisation for Dummies and the 2003 Cambridge paper on Xen. Both are quite dated now, the academic paper much more so.

I was hoping to find some useful material in the Dummies guide for potential talks, but no such luck. The Xen paper is interesting, kinda, but is very out of date, to the point of historical interest only. In those early days they were following a paravirtualised route, modifying Linux (XenoLinux) and Windows to run as guests. However, they backed the wrong horse really, as so much progress was made in hypervisor technology that VMware became the market leader. Modifying all those guest O/S's each time there was an update? It was never going to fly.

While I mull over my lab, I will continue to read some academic papers. More up-to-date ones! Based on that I hope to write a summary of the state of the art. Now I know myself that I don’t have a good record of finishing these things (family/work excuses etc. etc.) so I’ll see how I get on.