Five years ago it was a laboratory wonder, a new-fangled way of data processing that only boffins and rocket scientists understood or could use.
Today, grid computing is making its way steadily into the mainstream as senior managements seek new ways of extracting more and better value from their computing resources.
Its progress is being smoothed by a string of positive examples of the way it can boost efficiency and cut costs. Higo Bank in Japan, for example, was concerned that its loan processing system was taking an inordinately long time to service current and potential customers.
The answer was to integrate three important databases – risk assessment, customer credit scoring and customer profile – using grid technology. The result was a 50 per cent reduction in the number of steps, the amount of time and the volume of paperwork needed to process a loan.
The consequence? Instant competitive advantage compared with rival lenders.
A company in Europe was able to improve one of its business processes as well as its overall systems efficiency as a consequence of the grid phenomenon.
The company, Magna Steyr, a leading European automobile parts supplier, built an application called “Clash”, a three-dimensional simulator it uses in the design process to ensure that a new part does not interfere physically with existing fittings.
It took 72 hours to run, however, and was therefore slotted in at the end of the design process. If a problem was found, the designers had to go back to the beginning and start again.
Run on a grid system, it took four hours. “By reducing the time to four hours,” says Ken King, IBM’s head of grid computing, “the company was able to run the application nightly, changing the nature of the process from serial to iterative: it was able to make changes to designs on the fly, saving time and money.”
Charles Schwab, the US financial services group and a pioneer in the use of grid, had a portfolio management application that its customer service representatives used when customers phoned.
It ran an algorithm capable of spotting changes in the market and predicting the likely impact and risks. It ran on a Sun computer, but not fast enough: customers could be left on the phone for four minutes or more – an unacceptable period in banking terms.
Run in a Linux-based grid environment, the system was providing answers in 15 seconds. As a consequence, Schwab was able to provide better customer service leading to better customer retention.
These examples of grid in action, all developed by IBM, illustrate the power of grid to improve the utilisation of computing resources, to accelerate response rates and give users better insights into the meaning of their data. IBM claims to have built between 300 and 500 grid systems.
Oracle, Sun and Dell are among other hardware and software manufacturers to have espoused grid principles. Grid computing, therefore, looks like the remedy par excellence for the computing ills of the 21st century.
But is it a silver bullet or snake oil? How and why is it growing in popularity?
Thirty years ago, grid would have been described as “distributed computing”: the notion of computers and storage systems of different sizes and manufacture linked together to solve computing problems collaboratively.
At that time, neither hardware nor software was up to the task, and so distributed computing remained an unrealised ideal. The advent of the internet, falling hardware costs and software advances laid the foundation for grid computing in the 1990s.
It first achieved success in tackling massive computational problems that were defeating conventional supercomputers – protein folding, financial modelling, earthquake simulation and the like.
But as pressures on data processing budgets grew through the 1990s and early part of this decade, it began to be seen as a way of enabling businesses to maximise flexibility while minimising hardware and software costs.
Companies today often own a motley collection of computing hardware and software: when budgets were looser it was not unusual to find companies buying a new computer simply to run a new, discrete application. In consequence, many companies today possess vast amounts of under-utilised computer power and storage capability. Some estimates suggest average utilisation is no greater than 10 to 15 per cent. A lot of companies have no idea how little they use the power of their computer systems.
This is expensive in capital utilisation, in efficiency and in power. Computation requires power; keeping machines on standby requires power; and keeping the machines cool requires even more power. Clive Longbottom of the IT consultancy Quocirca points out that some years ago, a large company might have had 100 servers (the modern equivalent of mainframe computers).
“Today the average is 1,200 and some companies have 12,000,” he says. “When the power failed and all you had was 100 servers it was hard enough trying to find an uninterruptible power supply which would keep you going for 15 minutes until the generator kicked in.”
“Now with 12,000 servers you can’t keep them all alive. There’s no generator big enough unless you are next door to Sizewell B [the UK’s most modern nuclear power station].”
Mr Longbottom argues that the answer is to run the business on 5,000 servers, keep another 5,000 on standby and close the rest down.
This sets the rationale for grid: in simple terms, a company links all or some of its computers together using the internet or similar network so that it appears to the user as a single machine.
Specialised and highly complex software breaks applications down into units that are processed on the most suitable parts of what has become a “virtual” computer.
The company therefore keeps what resources it has and makes the best use of them.
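In outline, the mechanics look something like the Python sketch below – a deliberately toy model, with invented node names and a crude notion of a “work unit”, of the scheduling job that commercial grid middleware performs at far greater scale: split an application into units and place each unit on whichever machine in the pool has the most spare capacity.

```python
# Minimal sketch of the grid idea described above: a pool of machines is
# presented as one "virtual" computer, and work is broken into units that
# are placed on whichever node currently has the most spare capacity.
# Node names and the notion of a "work unit" are illustrative only.
import heapq

class Node:
    def __init__(self, name, capacity):
        self.name = name            # e.g. an under-utilised departmental server
        self.capacity = capacity    # arbitrary units of spare compute
        self.assigned = []

def schedule(work_units, nodes):
    """Assign each work unit to the node with the most free capacity."""
    # Max-heap on free capacity (heapq is a min-heap, so negate).
    heap = [(-n.capacity, i, n) for i, n in enumerate(nodes)]
    heapq.heapify(heap)
    for unit in work_units:
        free, i, node = heapq.heappop(heap)
        node.assigned.append(unit)
        heapq.heappush(heap, (free + 1, i, node))  # one unit of capacity now used
    return nodes

if __name__ == "__main__":
    pool = [Node("finance-01", 4), Node("design-02", 8), Node("spare-03", 2)]
    job = [f"unit-{k}" for k in range(10)]   # an application split into 10 units
    for node in schedule(job, pool):
        print(node.name, "->", node.assigned)
```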
It sounds simple. But in practice the software – developed by companies such as Platform Computing and Data Synapse – is complex and there are serious data management issues, especially where large grids are concerned.
And while the grid concept is understood more widely than a few years ago, there are still questions about the level of its acceptance.
This year, the pan-European systems integrator Morse published a survey of UK IT directors suggesting that most firms have no plans to try grid computing, regarding the technology as too costly, too complicated and too insecure.
Quocirca, however, which has been following the growth of grid since 2003, argued in an analysis of the technology this year: “We are seeing grid coming through its first incarnation as a high-performance computing platform for scientific and research areas, through highly specific computer grids for number crunching, to an acceptance by businesses that grid can be an architecture for business flexibility.”
Quocirca makes the important point that knowledge of Service Oriented Architectures (SOA), which many see as the answer to the increasing complexity of software creation, is poor among business computer users, while grid-type technologies are critical to the success of SOAs: “Without driving knowledge of SOA to a much higher level,” it argues, “we do not believe that enterprise grid computing can take off to the extent we believe it could.”
Today’s grids need not be overly complicated. Ken King of IBM pours cold water on the notion that a grid warrants the name only if different kinds of computer are involved and if open standards are employed throughout: “That’s a vision of where grid is going,” he scoffs.
“You can implement a simple grid as long as you take application workloads, and these can be single applications or multiple applications, and distribute them across multiple resources. These could be multiple blade nodes [blades are self-contained computer circuit boards that slot into servers] or multiple heterogeneous systems.”
“The workloads have to be scheduled according to your business requirements and your computing resources have to be adequately provisioned. You have continually to check to be sure you have the right resources to achieve the service level agreement associated with that workload. Processing a workload balanced across multiple resources is what I define as a grid,” he says.
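His checklist – schedule the workload, provision resources, keep checking the service level agreement – amounts to a control loop, caricatured in the Python sketch below. The SLA threshold, the node counts and the measure_response_time stub are all invented for illustration.

```python
# Illustrative control loop for the grid behaviour described above:
# keep checking that a workload is meeting its service level agreement
# (SLA) and provision extra resources from the pool when it is not.
# All numbers and the monitoring stub are invented for this example.
import random

SLA_SECONDS = 15          # e.g. answers within 15 seconds, as in the Schwab case
spare_pool = 6            # idle nodes that can be drafted in
allocated = 2             # nodes currently running the workload

def measure_response_time(nodes):
    # Stand-in for real monitoring: more nodes means faster answers.
    return 60 / nodes + random.uniform(0, 2)

for check in range(10):
    observed = measure_response_time(allocated)
    if observed > SLA_SECONDS and spare_pool > 0:
        allocated += 1            # provision another node to the workload
        spare_pool -= 1
        action = "provisioned extra node"
    else:
        action = "SLA met" if observed <= SLA_SECONDS else "pool exhausted"
    print(f"check {check}: {observed:.1f}s with {allocated} nodes - {action}")
```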
To meet all these demands, IBM marshals a battery of highly specialised software, much of it underpinned by Platform Computing and derived from its purchase of Tivoli Systems.
These include Tivoli Provisioning Manager, Tivoli Intelligent Orchestrator and Tivoli Workload Scheduler, along with the eWorkload manager, which provides end-to-end management and control.
Of course, none of this should be visible to the customer. But Mr King says grid automation is still a way off: “We are only in the first stages of customers getting comfortable with autonomic computing,” he says wryly.
“It is going to take two, three, four years before they are willing and able to yield up their data centre decision-making to the intelligence of the grid environment. But the more enterprises that implement grid and create competitive advantage from it, the more it will create a domino effect for other companies who will see they have to do the same thing. We are just starting to see that roll out.”
Virtualisation can bring an end to ‘server sprawl’
Virtualisation is, in principle, a simple concept. It is another way of getting multiple benefits from new technology: power-saving, efficiency, smaller physical footprint, flexibility.
It means taking advantage of the power of modern computers to run a number of operating systems – or multiple images of the same operating system – and the applications associated with them separately and securely.
But ask a virtualisation specialist for a definition and you’ll get something like this: “It’s a base layer of capability that allows you to separate the hardware from the software. The idea is to be able to start to view servers and networking and storage as computing capacity, communications capacity and storage capacity. It’s the core underpinning of technology necessary to build any real utility computing environment.”
Even Wikipedia, the internet encyclopaedia, makes a slippery fist of it: “The process of presenting computer resources in ways that users and applications can easily get value out of them, rather than presenting them in a way dictated by their implementation, geographic location or physical packaging.”
It is accurate enough but is it clear?
To cut through the jargon that seems to cling to this topic like runny honey, here is an example of virtualisation at work.
Standard Life, the financial services company that floated on the London stock market this year, had been, over a 20-year period, adding to its battery of Intel-based servers in the time-honoured way. Every time a new application was required, a server was purchased.
By the beginning of 2005, according to Ewan Ferguson, the company’s technical project manager, it was running 370 physical Intel servers, each running a separate, individual application. Most of the servers were under-utilised; while a variety of operating systems were in use, including Linux, it was predominantly a Microsoft house – Windows 2000, 2003 and XP Desktop.
The company decided to go the virtualisation route using software from VMware, a wholly owned (but very independent) subsidiary of EMC Corporation, the world’s largest storage system vendor. VMware, with its headquarters in Palo Alto, California, virtually (if you’ll excuse the pun) pioneered the concept. As one competitor acknowledged: “VMware built the virtualisation market place.”
By January 2006, Standard Life had increased the number of applications running on its systems to 550: the number of physical servers, however, had decreased by 20 to 350.
But why use virtualisation? Why not simply load up the underutilised machines?
Mr Ferguson explains: “If you are running a business-critical application and you introduce a second application on the same physical machine there are potential co-existence issues. Both applications may want full access to the processor at the same time. They may not have been programmed to avoid using the same memory space so they could crash the machine.
“What virtualisation enabled us to do was to make the best use of the physical hardware but without the technology headache of co-existing applications.”
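The co-existence point can be made concrete with a toy model. In the Python sketch below – the memory figures and application names are made up – carving a host into fixed virtual-machine allocations keeps each application inside its own slice, and turns an oversized request into a clean refusal rather than a crashed shared machine. It is a simplification of what virtualisation software enforces in practice.

```python
# Toy illustration of the co-existence problem and the virtualisation fix.
# Memory sizes and application names are invented; the point is only that
# each virtual machine gets a fixed, isolated allocation instead of
# competing for the whole host.

class Host:
    def __init__(self, memory_gb):
        self.memory_gb = memory_gb
        self.vms = []

    def carve_vm(self, name, memory_gb):
        used = sum(vm["memory_gb"] for vm in self.vms)
        if used + memory_gb > self.memory_gb:
            raise RuntimeError(f"not enough memory left for {name}")
        vm = {"name": name, "memory_gb": memory_gb}
        self.vms.append(vm)
        return vm

host = Host(memory_gb=16)

# Without virtualisation, both applications would see all 16GB and could
# both try to use it - the co-existence issue described above. With
# virtualisation, each application is confined to its own slice.
host.carve_vm("policy-admin", memory_gb=8)
host.carve_vm("reporting", memory_gb=6)

for vm in host.vms:
    print(f"{vm['name']}: guaranteed {vm['memory_gb']}GB, isolated from neighbours")

# A third, oversized request fails cleanly instead of destabilising a shared machine.
try:
    host.carve_vm("analytics", memory_gb=4)
except RuntimeError as err:
    print("refused:", err)
```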
And the benefits? Mr Ferguson points to faster delivery of service – a virtual machine is already in place when a new application is requested – better disaster recovery capability and less need for manual control of the systems: “By default now, any new application we install will be a virtual machine unless there is a very good reason why it has to be on dedicated hardware,” Mr Ferguson says.
While adoption of virtual solutions is still at an early stage, manufacturers of all levels of data processing equipment are increasingly placing their bets on the technology.
AMD, for example, the US-based processor manufacturer fighting to take market share from Intel, the market leader, has built virtualisation features into its next generation of “Opteron” processor chips.
Margaret Lewis, an AMD director, explains: “We have added some new instructions to the x86 instruction set [the hardwired commands built into the industry standard microprocessors] specifically for virtualisation software. And we have made some modifications to the underlying memory-handling system that make it more efficient. Virtualisation is very memory intensive. We’re tuning the x86 to be a very effective virtualisation processor.”
Intel, of course, has its own virtualisation technology that enables PCs to run multiple operating systems in separate “containers”.
And virtualisation is not limited to the idea of running multiple operating systems on a single physical machine. SWsoft, an eight-year-old software house with headquarters in Herndon, Virginia, and 520 development staff in Russia, has developed a system it calls “Virtuozzo” that virtualises the operating system.
This means that within a single physical server the system creates a number of identical virtual operating systems: “It’s a way of curbing operating system ‘sprawl’,” says Colin Wright, SWsoft enterprise director, comparing it with “server sprawl”, which is one of the targets of VMware.
Worldwide, 100,000 physical servers are running 400,000 virtual operating systems under Virtuozzo. Each of the virtual operating systems behaves like a stand-alone server.
Mr Wright points out that with hardware virtualisation, a separate licence has to be bought for each operating system. With Virtuozzo, it seems only a single licence need be bought.
This does raise questions about licensing, especially where proprietary software such as Windows is involved. Mr Wright complains that clarification from Microsoft is slow in coming. “It’s a grey area,” he says, “the licensing bodies are dragging their heels.”
In fact, the growth of virtualisation seems certain to open can after can of legal worms. Hard experience shows vendors are likely to blame each other for the failure of a multi-vendor project.
So who takes responsibility when applications are running on a virtual operating system in a virtual environment? The big fear is that it will be virtually no one.
Copyright The Financial Times Limited 2006