Friday, September 29, 2006

Gartner Announces New Portals, Content & Collaboration Summit 2006 (Contentmanager.net)

Gartner today announced that it will launch an annual European Summit on the topic of Portals, Content & Collaboration. The new summit will be held on 2-3 October 2006 in London, United Kingdom.

Businesses everywhere are under pressure to accelerate performance and deliver better results, and according to Gartner, most need a fundamental shift in the way they use Information Technology (IT). The biggest revenue impact will come from IT projects that enable business growth by augmenting the behaviour of key knowledge workers and making them more innovative, creative and productive. To achieve this, capturing, managing and exploiting the information that individuals and organisations possess has become a strategic priority. At this inaugural European Summit, Gartner analysts and industry experts will tackle the issues that are key to achieving this change and ultimately becoming what Gartner terms a ‘high-performance workplace’.

Debra Logan, conference chair and research vice president at Gartner, said: "Today it’s all about high performance. The most successful organisations use superior information management to fuel creativity and innovation, translating into marketplace success. Effective business performance depends on the integration of people, processes and technology. Information, and the insight it provides, are the key ingredients of long-term success. To move forward, companies need portals, content and collaboration tools, along with strategic vision and best-practice insight."

During the Summit Gartner analysts will explore:
- Choosing and deploying state-of-the-art portal, content and collaboration technologies
- Managing content to minimize risk and exploit value
- Using technology to get everyone to work together
- Practices and behaviors to accelerate people's performance
- Measuring business impact as you bring systems, technologies and management best practices together
- The impact and role of emerging technologies, such as collective intelligence, mashups, Web 2.0, folksonomies, e-discovery and Ajax
- How people’s work behaviors and perspectives will change over the next 10 years
- How mobile and wireless technologies and working practices will support collaboration
- Whether Knowledge Management is still relevant

Key Gartner presentations at the Summit include:
- High-Performance Workplace scenario: Top five actions to capitalise on change
- SharePoint and Google: Right information to the right people at the right time?
- Portal Product Marketplace: The impact of consolidation
- Future of work in Europe: How people, processes and technology work together
- The collaboration scenario: Creating value and competitive advantage
- Enterprise Information Management: Getting business value from information assets
- The Future of Search: Where Information Access takes us all
- Compliance and e-discovery: What you Need to Know

Gartner thought leadership will be complemented by two external keynote speakers:
- Will Hutton, Chief Executive, The Work Foundation
- Ken Douglas, Technology Director, Chief Technology Office, BP International Ltd

Delegates will also learn from real-life experiences presented in a number of end-user case studies; six tutorials; a best-practices session, Best Practices in Portal, Content Management, and Collaboration Development, Deployment and Management; and a Gartner panel, Powerhouse vendors in the high-performance workplace: Ladies and gentlemen, place your bets. The panel will combine audience questions about the powerhouse vendors with comments from Gartner’s global analysts. Further networking opportunities have been scheduled during the conference so that delegates can discuss experiences, challenges and successes.

Gartner organises 12 Summits in Europe each year on a wide range of IT industry topics. Each event features Gartner’s latest research and provides in-depth commentary by Gartner analysts, on-stage interviews with industry leaders, front-line case studies from across Europe and an opportunity to network with peers from across the region. In 2005, Gartner Summits in Europe attracted more than 3,300 delegates. For more information on Gartner events in the region please visit www.europe.gartner.com/events.
29.09.2006, Dorothee Stommel

Thursday, September 28, 2006

Apostles of the blogosphere (FT)

A few weeks ago I mentioned to a friend, who works in the “new media”, that I was to start a blog for FT.com. He was not impressed. “Blogging is over,” he informed me coldly.

I shrugged off the rebuke. After all, blogs – personal online journals – are proliferating. According to Technorati, a firm that monitors such things, more than 50m blogs had been created by last month – and the number is doubling every six months.

My doubts returned, however, when I saw an ominous message on the website of Britain’s main opposition party: “Conservative Party enters the blogosphere”. It announced that David Cameron, Tory leader, had started a blog. When the world’s least fashionable political party discovers a social trend, it is surely a sign that it is peaking.

Mr Cameron is far from alone. Over the summer a strange array of politicians started blogging. They included Hillary Clinton, who hopes to be the next president of America; Lionel Jospin, who hopes to be the next president of France; and Mahmoud Ahmadi-Nejad, who is already president of Iran.

Political advisers around the world are clearly giving the same advice to their bosses. Blogging is meant to let politicians communicate directly with voters in a folksy style. In practice it makes aspiring statesmen sound like Mr Pooter, the character from Victorian fiction whose Diary of a Nobody was famous for its banality.

Mr Cameron’s entries from his recent visit to India have cheery little headlines, such as: “Going green in a Delhi tuk-tuk”. The Tory leader is shown around by a tour guide who is “a real character”; he sees the Delhi metro and pronounces it “amazing”. This kind of deadly dull stuff crosses the political divide. David Miliband, Britain’s clean-cut environment minister, got blogging earlier this year – claiming that this might help bridge “the growing and potentially dangerous gap between politicians and the public”. One of his most recent entries has the scintillating headline: “Three cheers for Brighton library”.

Mrs Clinton and Mr Jospin are saved from Pooterisms by their inability even to attempt chatty informality. By contrast, Mr Ahmadi-Nejad’s first blog was full of strange personal details. He notes, for example, that he did very well in his university entrance exams, in spite of suffering from a nosebleed. But after a promising debut in August, he has fallen silent – perhaps distracted by other tasks, such as governing the country and building a nuclear bomb.

Ferenc Gyurcsany, prime minister of Hungary, is more conscientious. He posts new comments on his blog most days – sometimes twice a day. He also has a dangerous frankness, making him a natural for the blogosphere. In a recent speech – now posted on his blog – he confessed to lying constantly to get elected; a revelation that prompted riots in Budapest.

Mr Gyurcsany’s blog is apparently a good read – if you have mastered Hungarian. But it is not clear that it has worked to his political advantage. In fact – for all the interest that consultants are showing in blogging – there is only one politician’s blog that has clearly had a real impact.

In France, Segolene Royal, who is likely to win the French Socialist party nomination to stand for the presidency next year, has been running a website and blog that have generated lots of interest and new support. Ms Royal puts essays on topics such as unemployment or immigration on her site and invites readers to post responses. She claims that she will then incorporate the best ideas into her platform for the presidency. It may be a gimmick, but it has helped her appear modern and in touch with the people – qualities in short supply in French politics.

The Royal experiment will certainly be watched with great interest by other politicians. But so far it seems to be a one-off.

That will hardly surprise the apostles of the blogosphere, however. They have always argued that blogging is politically significant, precisely because it is not a tool of the elite. Bloggers are, as a book on the phenomenon, An Army of Davids by Glenn Reynolds, puts it, holding the Goliaths of the media and the political world to account.

In the US, bloggers are claimed to have played a key role in forcing the resignation of Trent Lott as Senate majority leader in 2002, after he made comments that seemed to express nostalgia for the South in the days of segregation. It is argued that blogs kept the issue alive when the mainstream media was prepared to let it drop. The blogosphere is also said to have been crucial in mobilising support for Ned Lamont, an anti-war candidate, who defeated Senator Joe Lieberman in Connecticut’s Democratic primary in August.

In reality, it is hard to measure the precise impact of bloggers on such events. But the idea of an insurgent grass-roots movement, energised by folk tapping away at their computers, appeals to the romantic, anti-elitist strain in US politics. Many politicians in America and elsewhere clearly feel the need to pay their respects to the blogosphere – if only as a precaution.

It is not self-evident, however, that the blogosphere’s influence on politics is all for the good. A political consultant once complained that his bosses’ reliance on focus groups handed power to people who were prepared to sit around for hours talking about politics with strangers, in return for a free sandwich. Similarly, if politics is increasingly shaped by the blogosphere, it will mean more power and influence for a sub-section of the population willing to waste hours trawling through dross on the internet.

Blogging as a medium has virtues: speed, spontaneity, interactivity and the vast array of information and expertise that millions of bloggers can bring together. But it also has its vices. The archetypal political blog favours instant response over reflection; commentary over original research; and stream-of-consciousness over structure.

Was that last judgment fair? Does it really follow logically from the rest of the argument? I am not sure and I have no time to think about it further. I have to get back to my blog.
gideon.rachman@ft.com

Copyright The Financial Times Limited 2006

Brussels attacked over Microsoft delay risk (FT)

The European Commission on Thursday came under attack over its antitrust battle with Microsoft, when members of the European Parliament and retailers warned that delays to the launch of the group’s new operating system would harm businesses.

Wednesday, September 27, 2006

Wolters puts educational arm up for review (FT)

Wolters Kluwer could reap about €600m ($760m) from selling its education business, analysts said on Wednesday as the Dutch publisher put the division up for review and unveiled new growth goals to round off a three-year restructuring process.

Information management on demand! (Contentmanager.de)

Recent months have made one thing clear: nobody can resist the software-on-demand trend. Even software houses that were sceptical until recently, such as SAP and Microsoft, now offer this business model. The Mainz-based TNCS GmbH & Co. KG recognised the viability of the on-demand concept early on and, together with its business partner aeveo it GmbH, based in Erlangen, launched the on-demand portal www.office4business.de - true to the motto of TNCS managing director Thomas Hahner: "While others are still dreaming, office4business is already on its way."

With software on demand, a software application is operated by a service company, the application service provider (ASP), and offered to the customer over public networks such as the internet. The interesting point is the shift in business risk: because the software is not purchased but rented over the network as and when it is needed, the customer's investment risk is minimised. Often this outsourcing of business processes covers not just individual applications but - as with TNCS's information management system Xbs-Client - several related areas such as document, workflow and contact management.

With the help of ASP services, companies can therefore outsource the software of entire administrative areas. The service provider takes care of all administration, including software maintenance, upgrades, licences and, optionally, user support.
27.09.2006, TNCS GmbH & Co. KG

Monday, September 25, 2006

The digital democracy's emerging elites (FT)

There are no prizes for guessing the most popular (and sought-after) types of internet enterprise at the moment. Anything that can be labelled Web 2.0 - social networks such as MySpace and Facebook, news aggregators such as Digg and Reddit and user-generated sites such as Wikipedia and Flickr - is the new new media.
Facebook is the latest such company to think of selling itself, with companies such as Yahoo and Viacom being asked to cough up $1bn (£526m). With News Corporation's purchase of MySpace last year for $580m now being regarded by Wall Street as a master stroke, other media companies are trawling for their own Web 2.0 acquisitions to transform themselves in the eyes of investors.
The hoo-hah over Web 2.0 companies is more than a matter of financial credibility. These companies, unlike most newspapers, magazines or television operations, do not employ professional writers, editors and producers to create material for their audience. Instead, they encourage their users both to contribute content and to select the most interesting things to display to others.
Old media "gatekeepers" (such as the people who edit this column) are out of fashion and what Jay Adelson, chief executive of Digg, calls "collective wisdom" is in. As Rupert Murdoch said last year of young internet users: "They don't want to rely on a god-like figure from above to tell them what's important . . . They want control over their media, instead of being controlled by it."
But such democratic rhetoric (what one critic has dubbed "digital Maoism") ignores one awkward fact. While anyone is free to launch a blog, contribute to Wikipedia or publish photographs on Flickr, a relatively small number of activists often dominate proceedings on Web 2.0 sites. Although they are unpaid, they can nonetheless achieve an elite status reminiscent of the old media's professional gatekeepers.
An illuminating spat occurred last month at Digg, which encourages its 500,000 registered users to submit news stories from around the internet and vote for (or "digg") the most interesting. The most popular rise up its rankings and are displayed on its home page. It is the kind of thing that might have pleased the original Diggers, a 17th-century English sect that believed people should form self-governing communes.
After protests that a group of about 20 Digg activists were promoting the stories they liked by supporting each other's choices, the site changed the algorithm that helps to rank stories. Kevin Rose, Digg's founder, said it would give more weight to "the unique digging diversity of the individuals digging the story" - in other words, make it harder for a cabal to distort the results.
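As a rough sketch of what such a change can look like, the snippet below ranks stories by the diversity of the users digging them rather than by raw vote count. The weighting scheme and the sample data are assumptions made up for illustration; this is not Digg's actual algorithm.

```python
# Toy diversity-weighted ranking: votes from users who digg everything in
# sight count for less than votes from occasional, independent users.
from collections import defaultdict

def diversity_weighted_scores(votes):
    """votes: iterable of (user, story) pairs; returns story -> (raw, weighted)."""
    diggers = defaultdict(set)        # story -> set of users who dugg it
    votes_per_user = defaultdict(int)
    for user, story in votes:
        diggers[story].add(user)
        votes_per_user[user] += 1

    scores = {}
    for story, users in diggers.items():
        raw = len(users)              # plain vote count
        # Discount each vote by how prolific the voter is, so a small group
        # voting for each other's submissions carries less weight.
        weighted = sum(1.0 / votes_per_user[u] ** 0.5 for u in users)
        scores[story] = (raw, round(weighted, 2))
    return scores

votes = [("a", "s1"), ("b", "s1"), ("c", "s1"),   # the same trio diggs s1...
         ("a", "s2"), ("b", "s2"), ("c", "s2"),   # ...and s2
         ("d", "s3"), ("e", "s3"), ("f", "s3")]   # three one-off users digg s3
print(diversity_weighted_scores(votes))
```

In this toy example every story has three diggs, but the story backed by three one-off users outscores the two promoted by the same trio - which is the effect Mr Rose described.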
That created outrage from the other side, with its activists complaining that they were being accused unfairly of cheating. Its top user, ranked by success in promoting stories to the home page, threatened to abandon the site. "I bequeath my measly number one position to whoever wants to reign," wrote the user known as P9. He was persuaded back and is still ranked at number two.
Mr Adelson insists that Digg is more democratic than some other sites because it is easy for anyone to contribute: users simply click to vote. Others, such as Wikipedia, demand more effort and have narrower participation. Jimmy Wales, the latter's founder, estimates that 70 per cent of editing is done by less than 2 per cent of registered users.
At one level, the fact that an elite often emerges within Web 2.0 sites is neither surprising nor sinister. The same thing can be seen in physical communities such as political parties. Relatively few have the patience or inclination to attend meetings and work on projects. The result is that groups of like-minded people who are particularly dedicated to the cause gradually gain dominance.
All the other slackers (or lurkers, as people who browse community sites for news and information without themselves contributing are known) gain a free ride at the expense of not controlling the agenda. "Things will always be done by the people who most want to do them. I don't think we will ever be shielded from that," says Clay Shirky, a consultant and academic.
But it does, as Nicholas Carr, a technology writer, says, "contradict a lot of the assumptions promulgated about the great egalitarianism of the web". There is not much of a logical distinction between someone who edits stories for money and someone who does so for recognition and social status. Indeed, Netscape has lured away some of the most active Digg users by paying them to submit stories to its site instead.
These are early days for Web 2.0 sites so it is difficult to predict the degree to which new media will come to look like old, with small groups of people filtering content for mass audiences. The optimistic view is that technology will make it so easy to switch among filters that gatekeepers will have less power. Digg already allows people to see stories that have been recommended by their friends rather than all of its users.
Still, the fact that there is an "A-list" of bloggers who garner a large proportion of internet links and traffic indicates that just because the web is an open medium it is not necessarily an egalitarian one. This generation of consumers has learnt to be sceptical about how information and entertainment is edited and filtered by groups of professionals. It ought to remain on its guard in the Web 2.0 world as well.
Copyright The Financial Times Limited 2006

Tuesday, September 19, 2006

Microsoft's Open-Source Promise Is All About the Future (Gartner)

Microsoft's Open Specification Promise will help clarify the use of certain Microsoft intellectual property in open-source initiatives. But it will have minimal immediate impact on the enterprise market.

Grid computing and virtualisation - are they money savers? (FT)

Five years ago it was a laboratory wonder, a new-fangled way of data processing that only boffins and rocket scientists understood or could use.

Today, grid computing is making its way steadily into the mainstream as senior managements seek new ways of extracting more and better value from their computing resources.

Its progress is being smoothed by a string of positive examples of the way it can boost efficiency and cut costs. Higo Bank in Japan, for example, was concerned that its loan processing system was taking an inordinately long time to service current and potential customers.

The answer was to integrate three important databases – risk assessment, customer credit scoring and customer profile – using grid technology. The result was a 50 per cent reduction in the number of steps, the amount of time and the volume of paperwork needed to process a loan.
The consequence? Instant competitive advantage compared with rival lenders.

A company in Europe was able to improve one of its business processes as well as its overall systems efficiency as a consequence of the grid phenomenon.

The company, Magna Steyr, a leading European automobile parts supplier, built an application called “Clash”, a three dimensional simulator it uses in the design process to ensure that a new part does not interfere physically with existing fittings.

It took 72 hours to run, however, and was therefore slotted in at the end of the design process. If a problem was found, the designers had to go back to the beginning and start again.

Run on a grid system, it took four hours. “By reducing the time to four hours,” says Ken King, IBM’s head of grid computing, “the company was able to run the application nightly, changing the nature of the process from serial to iterative: it was able to make changes to designs on the fly, saving time and money.”

Charles Schwab, the US financial services group and a pioneer in the use of grid, had a portfolio management application that its customer service representatives used when customers phoned up.

It ran an algorithm capable of spotting changes in the market and predicting the likely impact and risks. It was running on a Sun computer but not running fast enough. Customers could be left on the phone for four minutes or more – an unacceptable period in banking terms.

Run in a Linux-based grid environment, the system was providing answers in 15 seconds. As a consequence, Schwab was able to provide better customer service leading to better customer retention.

These examples of grid in action, all developed by IBM, illustrate the power of grid to improve the utilisation of computing resources, to accelerate response rates and give users better insights into the meaning of their data. IBM claims to have built between 300 and 500 grid systems.

Oracle, Sun and Dell are among other hardware and software manufacturers to have espoused grid principles. Grid computing, therefore, looks like the remedy par excellence for the computing ills of the 21st century.

But is it silver bullet or snake oil? How and why is it growing in popularity?

Thirty years ago, grid would have been described as “distributed computing”: the notion of computers and storage systems of different sizes and manufacture linked together to solve computing problems collaboratively.

At that time, neither hardware nor software was up to the task, and so distributed computing remained an unrealised ideal. The advent of the internet, falling hardware costs and software advances laid the foundations for grid computing in the 1990s.

It first achieved success in tackling massive computational problems that were defeating conventional supercomputers – protein folding, financial modelling, earthquake simulation and the like.

But as pressures on data processing budgets grew through the 1990s and early part of this decade, it began to be seen as a way of enabling businesses to maximise flexibility while minimising hardware and software costs.

Companies today often own a motley collection of computing hardware and software: when budgets were looser it was not unusual to find companies buying a new computer simply to run a new, discrete application. In consequence, many companies today possess vast amounts of under-utilised computer power and storage capability. Some estimates suggest average utilisation is no greater than 10 to 15 per cent. A lot of companies have no idea how little they use the power of their computer systems.

This is expensive in capital utilisation, in efficiency and in power. Computation requires power; keeping machines on standby requires power; and keeping the machines cool requires even more power. Clive Longbottom, of the IT consultancy Quocirca, points out that some years ago a large company might have had 100 servers (the modern equivalent of mainframe computers).

“Today the average is 1,200 and some companies have 12,000,” he says. “When the power failed and all you had was 100 servers it was hard enough trying to find an uninterruptible power supply which would keep you going for 15 minutes until the generator kicked in.”

“Now with 12,000 servers you can’t keep them all alive. There’s no generator big enough unless you are next door to Sizewell B [the UK’s most modern nuclear power station].”

Mr Longbottom argues that the answer is to run the business on 5,000 servers, keep another 5,000 on standby and close the rest down.

This sets the rationale for grid: in simple terms, a company links all or some of its computers together using the internet or similar network so that it appears to the user as a single machine.

Specialised and highly complex software breaks applications down into units that are processed on the most suitable parts of what has become a “virtual” computer.
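In outline, the dispatching problem looks something like the toy sketch below, which simply hands each unit of work to whichever node currently has the most spare capacity. The node names, capacities and unit costs are invented for illustration; real grid middleware of the kind discussed here is, of course, far more sophisticated.

```python
# Minimal work-unit dispatcher: always place the next unit on the node
# with the most free capacity (a greedy, load-balancing placement).
import heapq

def schedule(units, nodes):
    """units: list of (name, cost); nodes: dict of node name -> free capacity."""
    # Max-heap on free capacity, implemented with negated values.
    heap = [(-free, node) for node, free in nodes.items()]
    heapq.heapify(heap)
    placement = {}
    for name, cost in units:
        neg_free, node = heapq.heappop(heap)           # least-loaded node
        placement[name] = node
        heapq.heappush(heap, (neg_free + cost, node))  # it now has less spare room
    return placement

units = [("clash-sim-part-%d" % i, 10) for i in range(6)]
nodes = {"blade-1": 40, "blade-2": 25, "blade-3": 15}
print(schedule(units, nodes))   # the six parts are spread across the three blades
```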

The company therefore keeps what resources it has and makes the best use of them.

It sounds simple. But in practice the software – developed by companies such as Platform Computing and Data Synapse – is complex and there are serious data management issues, especially where large grids are concerned.

And while the grid concept is understood more widely than a few years ago, there are still questions about the level of its acceptance.

This year, the pan-European systems integrator Morse published a survey of UK IT directors that suggested most firms have no plans to try grid computing, with respondents claiming the technology is too costly, too complicated and too insecure. Quocirca, however, which has been following the growth of grid since 2003, argued in an analysis of the technology this year that: “We are seeing grid coming through its first incarnation as a high-performance computing platform for scientific and research areas, through highly specific computer grids for number crunching, to an acceptance by businesses that grid can be an architecture for business flexibility.”

Quocirca makes the important point that knowledge of Service Oriented Architectures (SOA), which many see as the answer to the increasing complexity of software creation, is poor among business computer users, while grid-type technologies are critical to the success of SOAs: “Without driving knowledge of SOA to a much higher level,” it argues, “we do not believe that enterprise grid computing can take off to the extent we believe it could.”

Today’s grids need not be overly complicated. Ken King of IBM pours cold water on the notion that a grid warrants the name only if different kinds of computer are involved and if open standards are employed throughout: “That’s a vision of where grid is going,” he scoffs.

“You can implement a simple grid as long as you take application workloads, and these can be single applications or multiple applications, and distribute them across multiple resources. These could be multiple blade nodes [blades are self-contained computer circuit boards that slot into servers] or multiple heterogeneous systems.”

“The workloads have to be scheduled according to your business requirements and your computing resources have to be adequately provisioned. You have continually to check to be sure you have the right resources to achieve the service level agreement associated with that workload. Processing a workload balanced across multiple resources is what I define as a grid,” he says.

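A crude way to picture that provisioning check is the sketch below: before a workload runs, estimate whether the resources it has been given can still meet its service-level agreement, and flag it for re-provisioning if not. The field names, figures and the linear runtime estimate are all assumptions made for this illustration, not IBM's scheduling logic.

```python
# Toy SLA check: does the capacity currently assigned to a workload let it
# finish within the runtime promised in its service-level agreement?
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    required_cpus: int
    sla_max_runtime_hours: float

@dataclass
class ResourcePool:
    name: str
    free_cpus: int
    hours_per_work_unit_per_cpu: float   # assumed, simplistic cost model

def meets_sla(workload: Workload, pool: ResourcePool, work_units: int) -> bool:
    cpus = min(workload.required_cpus, pool.free_cpus)
    if cpus == 0:
        return False
    estimated_runtime = work_units * pool.hours_per_work_unit_per_cpu / cpus
    return estimated_runtime <= workload.sla_max_runtime_hours

nightly_job = Workload("clash-simulation", required_cpus=32, sla_max_runtime_hours=8.0)
pool = ResourcePool("blade-grid", free_cpus=24, hours_per_work_unit_per_cpu=4.0)

if not meets_sla(nightly_job, pool, work_units=60):
    print("re-provision:", nightly_job.name, "cannot meet its SLA on", pool.name)
```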
To meet all these demands, IBM marshals a battery of highly specialised software, much of it underpinned by Platform Computing and derived from its purchase of Tivoli Systems.

These include Tivoli Provisioning Manager, Tivoli Intelligent Orchestrator and Tivoli Workload Scheduler, as well as the eWorkload manager that provides end-to-end management and control.

Of course, none of this should be visible to the customer. But Mr King says grid automation is still a way off: “We are only in the first stages of customers getting comfortable with autonomic computing,” he says wryly.

“It is going to take two, three, four years before they are willing and able to yield up their data centre decision-making to the intelligence of the grid environment. But the more enterprises that implement grid and create competitive advantage from it, the more it will create a domino effect for other companies who will see they have to do the same thing. We are just starting to see that roll out.”

Virtualisation can bring an end to ‘server sprawl’

Virtualisation is, in principle, a simple concept. It is another way of getting multiple benefits from new technology: power-saving, efficiency, smaller physical footprint, flexibility.

It means taking advantage of the power of modern computers to run a number of operating systems – or multiple images of the same operating system – and the applications associated with them separately and securely.

But ask a virtualisation specialist for a definition and you’ll get something like this: “It’s a base layer of capability that allows you to separate the hardware from the software. The idea is to be able to start to view servers and networking and storage as computing capacity, communications capacity and storage capacity. It’s the core underpinning of technology necessary to build any real utility computing environment.”

Even Wikipedia, the internet encyclopaedia, makes a slippery fist of it: “The process of presenting computer resources in ways that users and applications can easily get value out of them, rather than presenting them in a way dictated by their implementation, geographic location or physical packaging.”

It is accurate enough but is it clear?
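One way to make both definitions concrete is the sketch below, in which individual machines disappear behind a pool that is described only by its aggregate capacity. The class names and figures are invented for illustration.

```python
# Toy capacity pool: callers see total compute and storage capacity,
# never the individual machines that provide it.
from dataclasses import dataclass
from typing import List

@dataclass
class Machine:
    name: str
    cpu_cores: int
    storage_gb: int

@dataclass
class CapacityPool:
    machines: List[Machine]

    @property
    def compute_capacity(self) -> int:
        return sum(m.cpu_cores for m in self.machines)

    @property
    def storage_capacity(self) -> int:
        return sum(m.storage_gb for m in self.machines)

pool = CapacityPool([
    Machine("rack1-blade3", cpu_cores=8, storage_gb=500),
    Machine("rack2-blade1", cpu_cores=16, storage_gb=250),
])
print(pool.compute_capacity, "cores,", pool.storage_capacity, "GB")   # 24 cores, 750 GB
```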

To cut through the jargon that seems to cling to this topic like runny honey, here is an example of virtualisation at work.

Standard Life, the financial services company that floated on the London stock market this year, had been, over a 20-year period, adding to its battery of Intel-based servers in the time-honoured way. Every time a new application was required, a server was purchased.

By the beginning of 2005, according to Ewan Ferguson, the company’s technical project manager, it was running 370 physical Intel servers, each running a separate, individual application. Most of the servers were under-utilised; while a variety of operating systems were in use, including Linux, it was predominantly a Microsoft house – Windows 2000, 2003 and XP Desktop.

The company decided to go the virtualisation route using software from VMware, a wholly owned (but very independent) subsidiary of EMC Corporation, the world’s largest storage system vendor. VMware, with its headquarters in Palo Alto, California, virtually (if you’ll excuse the pun) pioneered the concept. As one competitor conceded: “VMware built the virtualisation market place.”

By January 2006, Standard Life had increased the number of applications running on its systems to 550; the number of physical servers, however, had decreased by 20 to 350.

But why use virtualisation? Why not simply load up the under-utilised machines?

Mr Ferguson explains: “If you are running a business-critical application and you introduce a second application on the same physical machine there are potential co-existence issues. Both applications may want full access to the processor at the same time. They may not have been programmed to avoid using the same memory space so they could crash the machine.

“What virtualisation enabled us to do was to make the best use of the physical hardware but without the technology headache of co-existing applications.”

And the benefits? Mr Ferguson points to faster delivery of service – a virtual machine is already in place when a new application is requested – better disaster recovery capability and less need for manual control of the systems: “By default now, any new application we install will be a virtual machine unless there is a very good reason why it has to be on dedicated hardware,” Mr Ferguson says.

While adoption of virtual solutions is still at an early stage, manufacturers of all levels of data processing equipment are increasingly placing their bets on the technology.

AMD, for example, the US-based processor manufacturer fighting to take market share from Intel, the market leader, has built virtualisation features into its next generation of “Opteron” processor chips.

Margaret Lewis, an AMD director, explains: “We have added some new instructions to the x86 instruction set [the hardwired commands built into the industry standard microprocessors] specifically for virtualisation software. And we have made some modifications to the underlying memory-handling system that make it more efficient. Virtualisation is very memory intensive. We’re tuning the x86 to be a very effective virtualisation processor.”

Intel, of course, has its own virtualisation technology that enables PCs to run multiple operating systems in separate “containers”.

And virtualisation is not limited to the idea of running multiple operating systems on a single physical machine. SWsoft, an eight-year-old software house with headquarters in Herndon, Virginia, and 520 development staff in Russia, has developed a system it calls “Virtuozzo” that virtualises the operating system.

This means that within a single physical server the system creates a number of identical virtual operating systems: “It’s a way of curbing operating system ‘sprawl’,” says Colin Wright, SWsoft enterprise director, comparing it with “server sprawl”, which is one of the targets of VMware.

Worldwide, 100,000 physical servers are running 400,000 virtual operating systems under Virtuozzo. Each of the virtual operating systems behaves like a stand-alone server.
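A crude way to see the difference between the two approaches is the sketch below, which contrasts a hypervisor host, where every virtual machine carries its own guest operating system, with an OS-level host, where containers share a single operating system instance. The classes and counts are illustrative assumptions, not a description of VMware's or SWsoft's products.

```python
# Toy comparison: guest operating system count per host under hardware
# virtualisation versus OS-level virtualisation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HypervisorHost:
    """Hardware virtualisation: each VM runs its own guest OS image."""
    vms: List[str] = field(default_factory=list)

    def guest_os_count(self) -> int:
        return len(self.vms)

@dataclass
class ContainerHost:
    """OS-level virtualisation: containers share one OS instance."""
    containers: List[str] = field(default_factory=list)

    def guest_os_count(self) -> int:
        return 1 if self.containers else 0

apps = ["crm", "mail", "intranet", "build"]
print("hypervisor host guest OS images:", HypervisorHost(vms=apps).guest_os_count())        # 4
print("container host guest OS images: ", ContainerHost(containers=apps).guest_os_count())  # 1
```

Whether each of those guest OS images also needs its own licence is exactly the question Mr Wright raises next.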

Mr Wright points out that with hardware virtualisation, a separate licence has to be bought for each operating system. With Virtuozzo, it seems only a single licence need be bought.

This does raise questions about licensing, especially where proprietary software such as Windows is involved. Mr Wright complains that clarification from Microsoft is slow in coming. “It’s a grey area,” he says. “The licensing bodies are dragging their heels.”

In fact, the growth of virtualisation seems certain to open can after can of legal worms. Hard experience shows that vendors are likely to blame each other for the failure of a multi-vendor project.

So who takes responsibility when applications are running on a virtual operating system in a virtual environment? The big fear is that it will be virtually no one.
Copyright The Financial Times Limited 2006

They are the future – and they’re coming to a workplace near you (FT)

By Lee Rainie

As consultant Marc Prensky calculates it, the life arc of a typical 21-year-old entering the workforce today has, on average, included 5,000 hours of video game playing, the exchange of 250,000 e-mails, instant messages and phone text messages, and 10,000 hours of mobile phone use. To that you can add 3,500 hours of time online.

Friday, September 08, 2006

Open Text: Swimming With the Big Fish (AMR)

Open Text’s final results were in line with its July 5 preannouncement. Total 4Q06 revenue fell 3.8% year over year to $105.2M, and earnings per share were 16 cents on a GAAP basis. Open Text is among the top four enterprise content management (ECM) players, alongside EMC Documentum, FileNet, and IBM.

Wednesday, September 06, 2006

New version of BEA WebLogic Real Time available - top Java performance for the financial world (contentmanager.de)

BEA Systems has released version 1.1 of WebLogic Real Time Core Edition. The product enables real-time Java applications. Predictable response times are now three times faster, and in the benchmark application the solution delivers a maximum latency of 30 milliseconds. This makes Java programs suitable for live environments. The release is available for download at http://www.bea.com/realtime.

Tuesday, September 05, 2006

Community platforms and content management – Part 1/3 (contentmanager.de)

Part 1: Communities and user-generated content. The buzzword "Web 2.0" stands for technical as well as editorial trends. "Communities" and "user-generated content" (UGC) are a challenge not only for portal operators but also for vendors of CMS and community software: the task is to merge the two systems into a homogeneous whole. Using the Swedish portal http://www.contentmanager.de/_tools/urltracker.php?url=www.expressen.se as an example, this three-part series shows how communities and UGC affect content management.

Friday, September 01, 2006

Hummingbird Enterprise and RedDot XCMS named "Trend-Setting Products of 2006" (contentmanager.de)

This is the fourth year in a row that Hummingbird has been voted onto the winners' list of best products by analysts, users and editors of KMWorld Magazine.