An Ideal World

Brian Madden’s recent blog post about Teradici (see links for his blog list) gave me a few ideas. IT is not all about design. There are evolutionary forces at play. Building blocks are designed and “systems” are (usually) designed, but in the soup, new configurations arise. Virtualisation is arguably one of them, VDI particularly. After all, you wouldn’t design a VDI solution by first inventing desktop processors, good graphics cards and functionally rich operating systems, and then virtualising them so that you could change the way you use them. VDI in its current state is an evolved solution and VMware, if you take the wider view, a tactical company. Ironically, while Teradici has to become more of a software company, its solution, i.e. doing things in hardware, is the more strategic one. Ultimately you can’t beat a bit of hardware! Ultimately you can’t do anything without a bit of hardware!

So I foresee more use of “new” hardware in VDI and virtual architectures to solve problems that cannot otherwise be solved well in software. (Does that mean they won’t be virtual any more?) Hardware changes to support virtualisation are not without precedent: chips have been redesigned with VT extensions.
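To make the VT point concrete: on Linux you can see whether a chip advertises those extensions by inspecting the CPU feature flags. A minimal sketch (my own illustration, nothing vendor-specific), assuming a Linux box with /proc/cpuinfo:

```python
# Minimal sketch, Linux-only: scan /proc/cpuinfo for the Intel VT-x ("vmx")
# or AMD-V ("svm") feature flags that signal hardware virtualisation support.

def has_vt_extensions(cpuinfo_path="/proc/cpuinfo"):
    """Return 'vmx' or 'svm' if advertised, else None."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                for flag in ("vmx", "svm"):
                    if flag in flags:
                        return flag
    return None

if __name__ == "__main__":
    flag = has_vt_extensions()
    print(f"Hardware virtualisation: {flag or 'not advertised'}")
```

(Whether the BIOS actually has the feature enabled is another matter; the flag only tells you the silicon can do it.)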

So, as I am taking a really big world view today, why not consider what an “ideal” IT architecture evolved from VDI would look like? Something to aim for in 15 or 20 years’ time? First off, I have to narrow the field. I’m going to be thinking mainly about corporate IT systems, which is where my experience lies. I’m thinking about large multi-national companies that could gain massively from efficiencies in their IT and whose employees use IT for the good of the company, i.e. to do business: running and writing apps, crunching data and making a profit. Something “constructive”, say.

Consequently I’m not thinking about IT architectures for home users and gamers (no disrespect).

There are some clear “ilities” on the list, in no specific order of priority yet. I will split them into two lists though: 1. good things for users and 2. good things for the people who run IT.

For the users:

  • Reliability. In an ideal world, users can always access their apps and data, 24×7, 365 days a year.
  • Accessibility. In an ideal world, users can access their apps and data from anywhere: in an office, on a plane, at home, and anywhere from Altoona, Barcelona and Cape Town to Zhengzhou.
  • Security. In an ideal world, users are confident that only those who should be able to see the data can. And it never gets lost.
  • Performability (not a word? OK, Performance then). Regardless of where users are, the interface is the same, and everything is the same speed, i.e. very fast.

For IT managers:

  • Power Efficiency. In an ideal world, IT is so efficient it can all be run from renewable sources and there is no electricity bill.
  • Minimal Management. In an ideal world, each company has three IT staff, one in each region, who can maintain the whole corporation. Anything that needs to be done more than once is automated. Hey, it’s an ideal world, remember. If your company has doctors on its staff, I bet it doesn’t have more than one in each region.

The two points above mean that IT costs next to nothing, which in itself is of course a bonus to the business. All very over-simplified, and somewhat meaningless at such a high level, but I do want to break down some of the bullet points above into more detail and see how we might re-engineer ourselves into a Sci-Fi world where we get a bit closer.

That means thinking about backups, protocols, user interfaces, life-cycle and provisioning, data centres, storage and everything else that goes to make up IT.

For a future blog.

Should get my new system soon, so expect something more geeky and technical in the meantime.

Desktop Virtualisation Forum (cont.)

So, on to Roy Illsley of Ovum and “Market Trends: Is 2010 the tipping point for Desktop Virtualisation?”. The technology is new and still small: the global market for desktops is 600 million units, and virtualisation has about 0.5% of that market, roughly 3 million units. But it’s not just about the technology, it’s about the process too, and this is where there needs to be a change in mindset. There is a convergence in thinking between VMware and Citrix and a growing “ecosystem”, as evidenced by this forum and others.

Last, but not least, Simon Bullers, CEO of RedPixie, presented “Implementing best practice for desktop virtualisation”. His bullet points on how to deliver a successful desktop virtualisation project included:

  • Create a culture of teamwork. Think about whether you need a dedicated team or whether to handle it as BAU (business as usual). Get the Data Centre team involved.
  • Create a culture of end users. Build positive PR within the organisation and an appetite for change.
  • Technology – Client. There are various types of client to consider, there are various types of application to consider. Build a demo lab.
  • Technology – Storage. Measure and optimise throughout the course of the project.
  • Technology – Platform. Server hardware: blades vs. racks. Type 1 or type 2 hypervisor? A blend?
  • Size for the peak number of concurrent users (see the sizing sketch after this list).
  • Process: spend time information gathering and planning. Decide on scheduling.
  • Agree the appetite for risk.
  • Process: there will be difficult decisions; don’t get involved in a blame game.
  • Financials – it’s a minefield! Does it need to show an ROI? User chargebacks?
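The sizing point deserves a number or two. Here is a back-of-the-envelope sketch (my own, with purely illustrative figures, not RedPixie’s method) for estimating host count from peak concurrent users, assuming memory is the binding constraint:

```python
import math

def hosts_needed(peak_users, ram_per_desktop_gb, host_ram_gb,
                 overcommit=1.0, headroom=0.2):
    """Hosts required to carry the peak concurrent desktop load.

    overcommit: memory overcommit ratio (1.0 = none).
    headroom:   fraction of each host reserved for spikes/failover.
    All figures here are illustrative assumptions, not recommendations.
    """
    usable_gb = host_ram_gb * (1 - headroom) * overcommit
    desktops_per_host = int(usable_gb // ram_per_desktop_gb)
    return math.ceil(peak_users / desktops_per_host)

# Hypothetical example: 2,000 concurrent users, 2 GB desktops, 96 GB hosts.
print(hosts_needed(2000, 2, 96))  # 38 desktops/host -> 53 hosts
```

In practice you would size against CPU, IOPS and network as well and take the worst case, but the shape of the calculation is the same.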

A quick executive summary: Windows 7 is the best reason to adopt so far, but don’t play the funny numbers game (a reference, I guess, to the potential cost savings).

Many points to consider there, and the forum as a whole was very worthwhile. After this talk there was a panel Q&A session before lunch. I regret not being able to attend the afternoon breakout sessions, where no doubt plenty of discussion took place and many thought-provoking ideas developed from the morning themes.

To reiterate what I said yesterday, I think that to be truly successful, a desktop virtualisation project has to deliver 100%. The overheads of maintaining two or more desktop platforms will quickly kill any efficiencies. That is why I would seek to convince the sceptics first and not start with the users who were already fans.
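To put a crude number on that, here is a sketch with purely hypothetical figures (the per-seat saving and second-stack cost are mine, for illustration only):

```python
def net_annual_saving(total_seats, migrated_seats,
                      saving_per_seat=150.0, second_stack_cost=250_000.0):
    """Net annual saving while two desktop platforms coexist.

    saving_per_seat:   assumed annual saving per migrated desktop.
    second_stack_cost: assumed annual cost of keeping a second platform
                       (tooling, images, skills) alive alongside VDI.
    """
    dual_running = migrated_seats < total_seats
    return (migrated_seats * saving_per_seat
            - (second_stack_cost if dual_running else 0))

for pct in (25, 50, 75, 100):
    seats = 5000 * pct // 100
    print(f"{pct:>3}% migrated: net £{net_annual_saving(5000, seats):,.0f}")
```

On these (invented) numbers the project is underwater at 25% migrated and only pays off properly at 100%, which is exactly the point.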

I was interested in the “Zero Client” device. This for me is the “ultimate” solution, or at least the most evolved of all the solutions currently in play. In some ways it is a direct descendant of a dumb terminal of the type Wyse manufactured 30 years ago. A serial line delivering ASCII characters has been replaced by the LAN or WAN delivering rich media over optimised protocols. I would put my money on these types of device to be the most successful as the field develops.

As to the next generation of device after that, I think it will naturally be led by advances in user interfaces in the field of human computer interaction. That field seems to have been quiet in recent years after the revolution in mice, graphics and workstations. Maybe that’s my cue to go and watch some more Sci-Fi movies…there must be another HCI revolution due soon?

Done some reading

Over a hectic bank holiday weekend juggling DIY, childminding and my wife’s business duties (that’s another story), I managed to finish reading both ends of the spectrum: Virtualisation for Dummies and the 2003 Cambridge paper on Xen (“Xen and the Art of Virtualization”). Both are quite dated now, the academic paper much more so.

I was hoping to find some useful material in the Dummies guide for potential talks, but no such luck. The Xen paper is interesting, kinda, but so out of date as to be of historical interest only. In those early days they were following a paravirtualised route, modifying Linux (XenoLinux) and Windows as guests. However, they backed the wrong horse really: so much progress was made in hypervisor technology (and in hardware support like the VT extensions mentioned earlier) that VMware became the market leader. Modifying all those guest OSes each time there was an update? It was never going to fly.

While I mull over my lab, I will continue to read some academic papers. More up-to-date ones! Based on that I hope to write a summary of the state of the art. Now I know myself that I don’t have a good record of finishing these things (family/work excuses etc. etc.) so I’ll see how I get on.