Saturday, December 25, 2010

Social software and Microsoft SharePoint 2010

Since Microsoft SharePoint 2010 became available in the spring of this year, quite a few implementations have already been carried out. While setting up the first SharePoint 2010 intranet environments, it has become clear that more and more customers are asking about the possibilities around ‘knowledge (and social) networking’ in SharePoint 2010. This article discusses the rise of ‘social software’ within organisations and the role SharePoint 2010 can play in it.


It can be quite difficult to put the concept of ‘social software' into words. The main reason is that social software takes many forms and is surrounded by buzzwords with vague definitions. In addition, the average person already has personal experience with social software on the internet, or knows it through friends or family. As a result, a one-sided picture has often formed around what is a dynamic and versatile phenomenon.

History
The foundation of social software was laid around the turn of the century, with internet standards such as XML, RSS and web services. These formed the basis for the ‘second generation of the internet' (in recent years referred to by the buzzword ‘Web 2.0'). Some eight years ago (up to 2002), the internet was still mainly a place where people (then just over half a billion of them) consumed information; only a very small share of users also published content. Around 2003, two years after the ‘internet bubble', several developments converged. The average speed of a home internet connection rose rapidly, and the number of people on the internet doubled within three years to one billion users. At the same time, platforms such as Blogger (weblogs) and MySpace (social networks) matured at a rapid pace.
In the years that followed, the use of websites that make it easy to share content (text, images, video, and so on) grew enormously; consider the success of online platforms such as Facebook, Twitter and YouTube. In recent years, these platforms have increasingly been used for business, and more and more companies have started to investigate how these techniques can be applied within their own organisations.

The power of social software
What makes social software valuable, and what are the benefits for organisations? The term ‘social' (and services such as Twitter and Facebook) still carries a negative association for some people. This can have various causes, such as past experiences or prejudices about ‘pointless communication (chatting)'. It also does not help that many self-appointed gurus have rebranded themselves as ‘social media experts' and broadcast a great deal of noise in the media.
The power of social software is fairly easy to pinpoint: it lies mainly in the low barrier to sharing. Social software makes it possible to post a message readable by a large audience, or to add a video to a (wiki) page, within a minute. Someone else can then easily post a reaction or add a photo, so that content is created collaboratively (without technical knowledge). This is where the power lies, and with SharePoint 2010 that power is also available for internal use.

In recent years, many organisations have tried in vain to pursue an active knowledge management strategy. Only in a few cases have these efforts lived up to the initial expectations. If there is one area where social software can make a major contribution, it is knowledge management.

"Sociale media heeft de afgelopen jaren gezorgd voor een explosie aan waardevolle informatie op het internet, en met de juiste aanpak is dit ook binnen organisaties mogelijk".

Social software offers a platform centred on the individual: the employee. 'Intranet 1.0 software', by contrast, was initially about giving a platform to the Marketing and Communications department. In recent years, other departments were also given the ability to add content to the intranet using content management systems. And although these systems (such as SharePoint) added the ability to publish documents, low-barrier information sharing by employees remained out of reach for all but a few companies. SharePoint 2010 adds a wide range of capabilities that finally make filling an intranet (or extranet) with information, and enriching it, genuinely simple and accessible. A well-thought-out information architecture and implementation remains, of course, an important precondition.

And yes, we should also pause to consider the sceptics. A low barrier naturally also means that people can start reporting what they ate for lunch, possibly accompanied by a photo of the daily menu and the entire canteen staff. And yes, for most people in the organisation this information will have no direct value. Fortunately, there are many ways to filter data, such as by topic or department, so that everyone can compose their own information feed. In addition, there are powerful tools such as tagging, ratings and recommendations that help surface valuable content. Finally, a social intranet develops its own ‘social norms' over time, as everyone learns the ‘etiquette of internal social networks' and these conventions become part of the company culture.

Research firm Gartner published a report in early 2010 entitled "Social Software is an Enterprise Reality". In it, Gartner predicts that by around 2014, social software will have replaced e-mail as the primary means of communication for 20 per cent of business professionals. The general expectation is that most companies will adopt social networks in the coming years. Most experts expect internal social networks to be more effective than e-mail for certain business uses, such as communicating status updates within projects and locating expertise.

SharePoint 2010
Purchasing SharePoint 2010 brings in a toolbox containing a host of valuable social software components. SharePoint 2010 offers improved capabilities around wikis, blogs and social bookmarks. The capabilities around user profiles have also been greatly improved, so that features familiar from Facebook and LinkedIn (such as note boards and news feeds) can be deployed within the organisation. One of the most powerful additions to SharePoint 2010 is the ‘Term Store', with which organisations can set up a ‘SharePoint-wide' taxonomy and/or folksonomy.
The taxonomy (a hierarchical classification method) will be familiar to many people, but the folksonomy is a relatively new concept. Folksonomy emerged during the Web 2.0 revolution and is a contraction of the words ‘folk' (people) and ‘taxonomy'. It is a form of classification based on consensus among the people (the employees). The beauty of a folksonomy is that it is a bottom-up approach to realising a company-wide taxonomy. In practice, such an approach can be realised many times faster than a top-down approach (and can save many hours of meetings). Once well populated, a folksonomy can still be centrally managed and cleaned up within SharePoint. Within SharePoint, ‘anything with a URL' (sites, pages, documents and other objects) can be tagged, and each tag can be traced back to a person.
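To make the folksonomy model concrete, here is a minimal illustrative sketch in Python (not SharePoint's actual Term Store API; all names are hypothetical): anything with a URL can be tagged, every tag is traceable to the person who applied it, and the same store can be filtered to build a personal view or to locate expertise.

```python
from collections import defaultdict

class FolksonomyStore:
    """Toy model of a social tag store: URL -> set of (tag, tagger) pairs."""

    def __init__(self):
        self.tags = defaultdict(set)

    def tag(self, url, tag, tagger):
        # Every tag is traceable to the person who applied it.
        self.tags[url].add((tag.lower(), tagger))

    def urls_for_tag(self, tag):
        # Filter the intranet 'firehose' down to a single topic.
        return [url for url, pairs in self.tags.items()
                if any(t == tag.lower() for t, _ in pairs)]

    def taggers_of(self, url):
        # Tags point back to people, which helps in locating expertise.
        return {person for _, person in self.tags[url]}

store = FolksonomyStore()
store.tag("http://intranet/pages/project-x.aspx", "SharePoint", "alice")
store.tag("http://intranet/docs/design.docx", "sharepoint", "bob")
print(store.urls_for_tag("SharePoint"))                      # both URLs match
print(store.taggers_of("http://intranet/docs/design.docx"))  # {'bob'}
```

The bottom-up character shows in the code: there is no predefined term hierarchy, only the consensus that emerges from what people actually tag.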

The social features of SharePoint 2010 can bring organisations important benefits, first of all in transparency and knowledge sharing. Further gains can be made through faster and more efficient access to expertise within the organisation, lower communication costs and improved (cross-departmental) collaboration. All in all, things most organisations have been waiting for...




Read more: http://www.computable.nl/artikel/ict_topics/ecm/3618782/1277020/sociale-software-en-microsoft-sharepoint-2010.html#ixzz1C3Ro3cV5

Friday, December 17, 2010

Tech for Small Business

By Paul Taylor

Published: September 22 2010 10:19 | Last updated: September 22 2010 10:19

The closest thing Knight Corporate Finance has to an office is Home House, a members club in central London, where co-founders Paul Billingham and Adam Zoldan can host meetings if required. Other than that, all team members work from home, which Paul and Adam, both with young families, see as an advantage.

Knight Corporate Finance is a boutique business advisory firm which has advised on the sale of more than 20 companies in the IT and Telecoms industries in the UK since it was set up two years ago.

“Setting up our own business means that we do not have set working hours, but we do work a lot of hours,” says Paul, who is based in Warrington, 200 miles north of Adam, who lives in London. In addition to the two partners, there are three other employees.

“Working from home gives us the flexibility to be involved in family life and then launch straight back into business. Our team also benefits from that flexibility and we are lucky to have employees who are mature and responsible enough to motivate themselves and do what needs to be done.”

Because the partners are constantly visiting clients around the country, they decided that the business would not own any infrastructure – only devices – and that the IT services it required would be hosted using cloud-based infrastructure from specialist providers.

For example, the two partners have hosted IP telephony at home which allows them to redirect calls to their Manchester and London phone numbers, to their mobiles or to a landline at any location. If they are not able to take a call, it is routed to a live answering service where the call is answered in the company name, details taken and sent by email.

Smartphones are certainly the most used technology – Paul and Adam both have iPhones. “Being on the move a lot and working long hours means you need a device that enables you to connect in every way possible and do some non-work things,” says Adam. “The iPhone is essentially a consumer device I personally like that also connects me for work.”

Knight CF also uses a pay-as-you-go conference call service for client calls and the weekly team catch-up. Email is provided by a hosted Microsoft Exchange server which can be accessed through PCs and mobile devices and allows the team to share calendars.

When it comes to documents, all information is stored and shared on a hosted Microsoft SharePoint server – Knight CF’s central collaboration site. SharePoint also allows client access to a secure team site - secure partitioned areas of the server, accessed via the web, that allow each client or approved party to share information and interact with Knight CF.

If work is stored on a user’s PC, it is backed up using an online back-up service. Any working documents required on the move are saved on a cloud server so that they can be accessed from any device with a web connection and a browser. Wireless cellular antennae enable an internet connection when out and about.

Adam has recently swapped his laptop for an iPad while on the move because he was experiencing shoulder ache from carrying the heavy laptop around all the time: “The PC is still essential for creating new presentations or in-depth spreadsheets but for viewing and making minor changes the iPad is fine. For email, internet and providing content it is perfect and for giving presentations it is excellent as it still provides the “wow” factor as an interesting new bit of kit.”

Because of the firm’s business, the partners have a natural interest in telecoms and IT and are always on the look-out for new products for personal use and to enhance the way the business works. With everything hosted, Knight CF’s infrastructure is flexible and can easily be updated and changed if an aspect no longer works well or a new gadget catches their eye. They also know exactly how much it will cost each month.

Paul says: “Work is part of our lives and the boundaries between work and family life have become blurred. Hosted technology means we can access our virtual office as frequently or infrequently as we need and interchange with family life at different times of the day. We’re always working but also always living our lives – the two are not separate.”

Copyright The Financial Times Limited 2010. Print a single copy of this article for personal use. Contact us if you wish to print more to distribute to others.

An early future from Google

By Chris Nuttall in San Francisco

Published: December 16 2010 22:44 | Last updated: December 16 2010 22:44


Gingerbread man: the Nexus S is the first Android phone to have the 2.3 version of its operating system

William Gibson, the science fiction writer, could have been envisioning Google’s Mountain View headquarters when he said: “The future is already here – it’s just not very evenly distributed.”

The Googleplex is where a lot of the future is currently stacked up – from experiments with driverless cars to support for robots on the moon.

Google products have names inspired by sci-fi – the Nexus One phone using its Android operating system refers to the Nexus-6 androids in the film Blade Runner. A successor, the Nexus S, has just come out along with another piece of hardware – the CR-48 notebook, which sounds like a cross between R2-D2 and C-3PO, the robots in Star Wars. In fact, it stands for Chromium-48, an unstable isotope and an all too suitable name for a shaky prototype of its Chrome computing system, first announced in July last year.

Google has decided to distribute the future more evenly. Instead of concentrating on the bug-ridden and delayed product in its labs, it is offering 60,000 or so CR-48 notebooks to users in the US to help with its development.

Samsung Nexus S

Pros: Second-generation “Google phone” made to the company’s specification; light, vivid 4in screen; decent battery life; fast processor; good camera functions; first to feature improved Android Gingerbread operating system.

Cons: Pure Google phone, so lacks added layers and services offered by other handset makers and operators; handling of music, video, games inferior to iPhone.
I gained a glimpse of what lies ahead with review units of the Nexus S and the CR-48 – and found that both contained intriguing possibilities.

The Nexus S is the first Android phone to have the 2.3 version of its operating system, codenamed Gingerbread. One new feature is support for near field communication technology: NFC can be used for contactless payments and exchanges of information such as sharing digital photos with a friend’s phone. Another use, previewed by the Nexus S in an app called Tags, allows users to hold their phones up to NFC-tagged objects and receive information from them such as text, pictures and links to websites.
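As a rough illustration of what "receiving information from a tag" involves, the sketch below parses a minimal short-form NDEF text record, the NFC Forum format such tags commonly carry. This is a simplified reading of the record layout for illustration, not the code behind Android's Tags app.

```python
def parse_ndef_text_record(data: bytes) -> str:
    """Decode a single short-form NDEF 'Text' record (NFC Forum RTD-Text)."""
    header = data[0]
    if not header & 0x10:                # SR bit: short record
        raise ValueError("only short records handled in this sketch")
    type_len, payload_len = data[1], data[2]
    rec_type = data[3:3 + type_len]
    if rec_type != b"T":                 # 'T' marks a Text record
        raise ValueError("not a Text record")
    payload = data[3 + type_len:3 + type_len + payload_len]
    status = payload[0]                  # bit 7: UTF-16 flag; low bits: language-code length
    lang_len = status & 0x3F
    encoding = "utf-16" if status & 0x80 else "utf-8"
    return payload[1 + lang_len:].decode(encoding)

# A hand-built record: header 0xD1, type length 1, payload length 14, type 'T',
# status byte 0x02 (UTF-8, 2-byte language code), language 'en', then the text.
print(parse_ndef_text_record(b"\xd1\x01\x0eT\x02enHello, NFC!"))  # -> Hello, NFC!
```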

Gingerbread also has a spiffier interface overall. Upgraded apps timed for its release include YouTube, which I found more responsive and fun to use than in my web browser. Settings for the 5Mp camera are more sophisticated and accessible, and there is better support for making internet calls and an improved on-screen keyboard experience.

Samsung Focus

Pros: Windows Phone 7 showcase smartphone; superb 4in Super Amoled screen; thin and light; excellent touch sensitivity; fast processor; enticing interface; HD video recording.

Cons: Limited number of apps; no Flash capability yet in Windows Phone 7.
The Nexus S is made by Samsung – the original Nexus One was an HTC handset – and features the wonderful brightness of colours of its Super Amoled (active matrix organic light emitting diode) slightly curved 4in screen. It feels light for its size, has good battery life, excellent call quality and is very responsive with its fast 1GHz processor. The Nexus S is on sale at Best Buy in the US ($529, or $200 with a T-Mobile contract) and is available in Europe from Monday (free on a long-term contract, or for £550 without a contract, at Carphone Warehouse and Best Buy stores in the UK).

. . .

Like its predecessor, the Nexus S is “pure Google” – designed to Google’s specifications and as a showcase for its latest, greatest version of Android. It lacks the interface layers and features that handset makers and operators have added to Android on other handsets to make up for its shortcomings against the iPhone, which still handles music, video and games much better.

Apple iPhone 4

Pros: Retina screen has highest resolution of any smartphone; intuitive operating system with more than 200,000 apps; excellent music and gaming.

Cons: Browser not Flash-enabled; no capability to turn itself into a WiFi hotspot; screen looks small next to some Android rivals; still not available in white.
This makes it hard to get excited about the Nexus S – it is an excellent smartphone, but it lacks a defining feature that would make it stand out from the growing Android crowd.

The same could be said of the CR-48 laptop – an ordinary black box of a notebook – but then it is meant to be a plain-looking machine for testing purposes only. However, the keyboard is one element of the design that is likely to appear in the two Chrome notebooks that Acer and Samsung are expected to launch in mid-2011.

The Caps Lock key has been replaced with a search magnifying glass, and in place of the usual F1, F2 etc function keys along the top are symbols representing brightness, volume and the forward, back, reload, full-screen and next-window functions associated with a browser. This is because the Chrome OS is modelled on Google’s Chrome browser, with the idea that the web can become the operating system and the browser its desktop interface.

CR-48 Chrome notebook

This prototype is being given to 60,000 testers in the US to iron out the kinks of Google’s ambitious project to move all our computing tasks from local PCs to the web.

Pros: Decluttered notebook, thanks to the web browser operating as the operating system; instant on and off functionality; cheap, low-powered machine that is long on battery life.

Cons: Abandonment of desktop concept is hard to adjust to; internet connectivity is essential; web alternatives to tasks done locally are incomplete; the Chrome OS struggles to deal with everyday peripherals such as printers and scanners.
This took some getting used to. I kept wanting to minimise the browser to see a familiar desktop with program icons. Instead, my programs were web applications whose icons appeared on the otherwise blank page when I selected “New Tab” in the browser. Default programs included YouTube, Gmail, Google Maps, Scratchpad – for taking quick notes – and a couple of games, where I could hit the Full Screen button and play as if I was not inside a browser window.

Google’s argument is that we spend so much of our computing life inside browsers that we may as well float off into cloud computing land, where tasks from e-mail to word processing and editing photos can already be carried out. We will not need expensive computers and slow-loading operating systems because processing and storage can be handled in Google’s data centres. The CR-48 has long battery life with only a low-power Atom processor and a 16GB flash drive for minimal storage.

However, printing, scanning, editing pictures and video, recording audio and accessing local files either failed to work, needed keystroke combinations or took much longer, and depended on the speed of my internet connection.

Such is the problem of dragging the future into the present – the web’s infrastructure and our own working habits are not equipped to deal with such a dramatic shift just yet, whereas a Blade Runner Nexus-6 android would probably cope very well.

chris.nuttall@ft.com

Copyright The Financial Times Limited 2010. Print a single copy of this article for personal use. Contact us if you wish to print more to distribute to others.

Tuesday, December 07, 2010

What is your IT organisation doing to fuel workers’ passion?

By John Hagel and John Seely Brown

Published: December 7 2010 23:23 | Last updated: December 7 2010 23:23

Passion drives performance. What is your IT organisation doing to fuel passion at every level?

In our opening column, we talked about the decades-long decline in financial performance. Return on assets for all public companies in the US has fallen 75 per cent since 1965. A profoundly destabilising technology infrastructure is a big part of this transformation.

Another key metric in decline is passion. According to the just-released 2010 Shift Index, four of five workers surveyed are not passionate about their jobs.

Sure, they are working longer hours during the downturn, but that doesn’t mean they are engaged or that they will stick with you when the economy improves. Without truly passionate workers, companies will find it difficult to turn round the steady deterioration in financial performance.

Passionate workers are more likely to take challenges and transform them into opportunities.

But passionate workers are easily frustrated by institutional, technical, and cultural barriers that make it difficult to learn and connect with others.

With the right technology infrastructure, however, organisations can fuel rather than frustrate passion. Here’s how.

Disposition for passion

Passionate workers possess two valuable dispositions.

Questing: When asked how they react to challenges, passionate employees we surveyed most often responded that they see an opportunity to learn something or solve problems rather than viewing the unusual as a nuisance or a distraction.

Passionate workers seek out challenges to test their abilities, rather than waiting for them to surface. The passionate are twice as likely as disengaged workers to display this questing disposition.

As a leader, you want people with questing dispositions to move to the next level of performance improvement.

Connecting: Passionate workers have a strong desire to reach out and connect with others who can help them get better faster. We found passionate workers are twice as likely as disengaged workers to have a connecting disposition. They exchange knowledge outside the firm through conferences and social media much more often than workers who lack passion.

Our research suggests that effective knowledge exchange will be crucial to performance improvement.

These dispositions of questing and connecting reinforce each other – both positively and negatively. If you have a questing disposition, but you lack the ability to connect, you can’t learn new things as easily from others. If you have a connecting disposition, but can’t focus your attention on interesting challenges, you’re not as likely to use connections you establish to improve performance.

Implications for technology

Since these dispositions are increasingly central to sustained performance improvement, the question for IT organisations becomes how to create the conditions that support passionate workers.

Most IT organisations have a hard time facilitating people with connecting and questing dispositions. Many people inside big corporations, in particular, view enabling tools such as social media or cloud computing as toys, distractions, or security breaches. In fact, from our experience in discussions with a range of IT executives, most IT departments are ambivalent about, if not actively resisting, the next generation of technologies.

But to help workers pursue their passion, leaders must:

Change the mindset

Most executives are deeply suspicious of workers’ passions, unless they define passion simply as working longer hours to get the usual rote tasks done. Instead, passion is the quest for unexpected challenges. Questing and connecting are huge opportunities to drive performance improvement, if you can encourage and support these traits.

Identify relevant edges

The edges of your firm and your industry – whether geographic, demographic, or between companies – offer the environments where questing and connecting dispositions flourish.

Edges are fertile ground for innovation, attracting risk takers who can drive knowledge creation and economic growth. They are where the questing and the connecting dispositions have the most freedom. Find the edges with the most opportunity and the least resistance, and mobilise passionate people to these edges so they can attack performance challenges emerging there.

Deploy the right platforms and tools

New technology can significantly enhance the impact of passionate employees. Cloud computing, and the sophisticated analytic tools that can be accessed in the cloud, provide individuals with the resources they need to experiment and improvise in addressing performance challenges.

Rather than waiting in a long line to receive resources from a central IT organisation, employees can use the emerging cloud infrastructure and access everything from raw server capacity to sophisticated research tools. They can rapidly scale up and back IT resources and take promising approaches to market.

But it’s not just cloud computing. Passionate workers can now use social networks to stay in touch with a much larger group of individuals. Shared workspaces provide an increasingly rich environment for these individuals to connect with each other, and with others outside the firm, to jointly develop promising approaches to difficult performance challenges.

In fact, these two categories of IT, cloud computing and social software, weave together in powerful ways to integrate both the questing and connecting dispositions of passionate workers. Employees begin to see the compounding effects of connecting with relevant and diverse expertise wherever it resides and combining that expertise with a rich array of IT resources to pursue challenging performance quests.

As passionate workers on the edge of the enterprise demonstrate the kind of impact they can achieve, less engaged workers start to see how much they can accomplish through their initiatives, and passion begins to build in them, as well. As the less engaged connect with more passionate workers, they manifest more of the questing and connecting dispositions. Passion starts to spread.

Emerging technologies play a central role in breaking down many of the institutional barriers that frustrate passionate workers. Rather than feeling blocked, these workers begin to feel more empowered. As passionate employees thrive, companies in turn will find themselves in a better position to deal with performance pressures. Instead of becoming a source of increasing stress, challenges become an opportunity for passionate workers to attain levels of performance never before possible.

John Hagel III and John Seely Brown are co-chairman and independent co-chairman, respectively, of the Deloitte Center for the Edge.

Their books include The Power of Pull, The Only Sustainable Edge, Out of the Box, The Social Life of Information, Net Worth, and Net Gain.

Copyright The Financial Times Limited 2010. Print a single copy of this article for personal use. Contact us if you wish to print more to distribute to others.

Thursday, November 11, 2010


Moving out of recession: Small spending steps can bring big productivity leaps

By Stephen Pritchard

Published: October 27 2010 09:25 | Last updated: October 27 2010 09:25

As businesses emerged from the last recession, following the dotcom bust in 2001, the recovery in IT spending lagged behind.

Companies that had invested heavily during the good years found they had overspent on IT and had more than enough equipment to support their operations. It was 2004 before investment in technology recovered fully. By at least one measure, IT spending also became less effective during the dotcom-induced downturn.

Businesses that had shed staff, or cut back other areas of their operations, found that their per capita IT costs increased.

Move forward to today, and a tentative economic recovery in most mature markets is once again putting a brake on IT spending. But businesses – as well as public sector organisations – are also being forced to look again at their cost bases, and IT is by no means immune from scrutiny.

At the same time, business leaders have to balance two competing demands: creating a leaner IT operation and creating a leaner business.

Although cutting budgets can produce quick savings, most enterprises spend only between 2 and 5 per cent of revenues (turnover) on technology; smaller companies, typically, will invest rather more.

But across the board, a small increase in IT spending can drive far greater gains in overall productivity.

“Steps towards recovery are still tentative,” cautions David Elton, an IT and change management expert at PA Consulting.

“The pressure on IT departments is still about money. There are signs that people are investing but most clients are still concerned about controlling costs.”

Boards remain cautious about a return to unfettered spending, where large sums of money seemingly vanished into long-term IT projects that failed, or failed to deliver the promised results.

This is prompting chief financial officers and chief information officers to look both at newer technologies, such as cloud computing, which can be deployed to reduce costs – and at improved methodologies for delivering IT services. In particular, there is growing interest in applying “lean” processes to IT.

“The CIO’s role is rapidly changing,” says Alexander Peters, a principal analyst at Forrester Research. “The recession accelerated this change but the drivers – social technologies, service oriented applications and the cloud – are strategic and require changes beyond tactical cost-cutting.”

Mr Peters is the co-author of a report that looks at how IT departments can apply “lean” thinking to their operations. In the report, he argues that CIOs can draw on methods developed in fields such as manufacturing, and use them to make IT not only cheaper, but more effective.

Lean thinking includes considering whether an enterprise should build or buy its IT infrastructure and services, moving on to newer, more efficient, platforms and making greater use of standardised processes.

But at its heart, Mr Peters argues, “lean” is about ensuring IT is more closely aligned to the business. This makes for more effective technology, and less waste.

“Best-practice executives view lean as a performance improvement strategy, rather than merely a cost-cutting exercise,” he says.

Bringing IT closer to the business, and ensuring it is more flexible and responsive, are key to lean thinking.

However, it also requires businesses to reconsider the way they run IT, both to cut costs and make it more responsive.

Moving to newer platforms and technologies should also provide businesses with a stronger foundation for a return to growth.

Strategies such as virtualisation – allowing a single computer to host multiple “virtual” machines – and server and storage consolidation, where those machines are run on fewer physical computers, will save money quite quickly, for businesses that have the expertise to implement them.

Some steps will require more initial investment. Installing computer and other equipment that draws less power can save significant sums over its lifetime, but businesses need to find the capital budgets for the hardware.

Research by IBM, for example, suggests that power consumption accounts for 75 per cent of data centre operating costs. Power costs are also growing much more rapidly than staffing, building or real estate expense, or taxes.

The cost of buying computer equipment, and of building data centres, is prompting more companies to look either at software as a service, outsourcing, or cloud computing.

IBM estimates that the construction cost of a 2,000 sq m data centre now runs to between $30m and $50m, putting it out of reach of all but the largest businesses or service providers.

Then there is the challenge of owning and running an asset based on technology that is both complex, and that rapidly becomes out of date.

A wholesale move to cloud computing might not be appropriate, although some commodity services, such as e-mail, archiving and software test and development, are already being hosted in the cloud for large businesses.

Frank Modruson, CIO of Accenture, the consultancy, points out that businesses with older and more complex IT infrastructures may have to update those before they can outsource the technology itself.

But making such investments is perhaps one of the few ways IT departments can free up cash to support new business initiatives, such as new online sales channels or social networking.

“Coming out of the recession, companies have started to redirect spending to the top line,” says Mark Hillman, vice-president for strategy at Compuware, an IT services company.

“They still have cost reduction initiatives in place, such as server consolidation, but they are limiting spending on the back office, to allow them to invest in areas that give them better connections to partners or customers, or in areas that affect their brand.”

Financial data for the Facebook generation

The financial services companies that buy the data services Thomson Reuters provide may have had a tough couple of years but they have not become less demanding.

Thomson Reuters provides financial market data to businesses including banks, brokerages and investment houses. The company supplies this information via traditional trading room terminals, but more traffic is being carried over the internet, in a business worth $15bn annually.

According to Kevin Blanco, vice-president of global application support and engineering at Thomson Reuters, ensuring clients receive good service across a worldwide network is a challenge.

As a data provider to fast-moving financial markets, Thomson Reuters has to meet two targets for its services: the availability and the responsiveness of data feeds.

This is especially critical for internet-based services, since it is these that are growing most quickly.

Thomson Reuters sets a target of 99.9 per cent “uptime” for its web-based products and a maximum eight-second response time.
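For perspective, a quick back-of-the-envelope calculation (illustrative only) of how much outage a given uptime target actually permits over a year:

```python
def allowed_downtime_hours(uptime_pct: float, period_hours: float = 365 * 24) -> float:
    """Hours of outage per period permitted by an uptime percentage."""
    return period_hours * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows {allowed_downtime_hours(pct):.1f} hours of downtime a year")
# A 99.9 per cent target works out to roughly 8.8 hours of outage a year.
```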

“Connections over dedicated circuits are expensive,” explains Mr Blanco. “There are some large banks that require dedicated circuits and we maintain them. But the majority of our products and of our strategic initiatives will be web based. There will be very few dedicated workstation installs or dedicated circuits in the future.”

But newly cost-conscious bankers want to maintain service levels to customers and this places demands on the services they buy from suppliers such as Thomson Reuters.

For Mr Blanco, this means maintaining or improving service quality levels, while controlling costs.

Financial services companies have come to expect from web-based services the reliability and responsiveness they got from dedicated links, as well as the ease of use associated with sites such as Amazon or even social media sites.

“Our user base is no longer [just] financial professionals in their 40s and 50s. The primary user is a junior banker who also uses Facebook or MySpace. Our interface and speed have to match that demographic.”

Researchers who study consumers’ online behaviour have found that visitors to websites often abandon a transaction and go elsewhere if a page takes more than two seconds to respond.

“We are not seeing [demand for] two seconds now, but it is certainly four to five seconds,” says Mr Blanco. “But I do feel that the demand will continue for response times to compress, especially for transactional services.”

Controlling latency – the delay in completing trades – and network quality for a company operating global services can be expensive and demand large numbers of skilled staff to diagnose and fix problems.

Like many other IT-dependent businesses, Thomson Reuters is increasingly relying on automation to cut the cost of delivering its technology.

Streamlining systems for updating services or deploying new applications to servers has cut support costs and, vitally, has improved system uptime.

And, Mr Blanco says, Thomson Reuters is making more use of automated monitoring and diagnostic tools to control the quality of its network.

In particular, web performance and monitoring software from specialist vendor Gomez has brought some rapid and significant improvements.

“In our corporate services business, we brought their website availability up to [99.9 per cent] in two months,” says Mr Blanco.

“We’ve also done the same for the rest of the business.”

IT, with its large fixed cost base and three-to-four-year project life cycles, was not well placed to respond to relatively rapid changes in the business climate.


Copyright The Financial Times Limited 2010. Print a single copy of this article for personal use. Contact us if you wish to print more to distribute to others.

Cloud computing in businesses

By Richard Waters in San Francisco

Published: November 1 2010 00:15 | Last updated: November 1 2010 00:15

Cloud computing may be one of the most talked-about IT trends of recent years, but it has yet to make much of a mark inside big business. Like many new tech trends, the hype has far outweighed the business realities.

If that is to change, then it could well be projects such as one recently undertaken by the tax division of ADP, the big US payroll processing company, that explain why.

Extracting data from its customers’ individual systems to prepare employee tax returns has been an expensive proposition, requiring separate engineering in the case of each customer to create the interface with ADP’s own systems.

As a result, it has only been economic to sell the tax filing service to large companies, typically with more than 1,000 employees, says Lori Schreiber, general manager of ADP's tax services division. But inserting a computing service delivered from the "cloud" into the middle of this process has now changed the economics of the business.

In ADP's case, the cloud service in question, from IBM, is a standardised way of "mapping" information from client systems so that it can be "read" by ADP's own systems.
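The article does not describe the IBM service's internals, but the underlying idea – one standardised mapping layer in place of per-customer interface engineering – can be sketched as below. The field names and client mappings are invented purely for illustration.

```python
# Hypothetical sketch: each client declares how its payroll export maps onto
# one canonical schema, so the receiving system only has to be built once.
CANONICAL_FIELDS = {"employee_id", "gross_pay", "tax_withheld"}

CLIENT_MAPPINGS = {
    "client_a": {"emp_no": "employee_id", "gross": "gross_pay", "fed_tax": "tax_withheld"},
    "client_b": {"id": "employee_id", "pay_gross": "gross_pay", "withholding": "tax_withheld"},
}

def to_canonical(client: str, record: dict) -> dict:
    """Translate one client-specific record into the canonical schema."""
    mapping = CLIENT_MAPPINGS[client]
    out = {mapping[field]: value for field, value in record.items() if field in mapping}
    missing = CANONICAL_FIELDS - set(out)
    if missing:
        raise ValueError(f"{client}: record is missing {sorted(missing)}")
    return out

print(to_canonical("client_a", {"emp_no": "42", "gross": 5000.0, "fed_tax": 800.0}))
# {'employee_id': '42', 'gross_pay': 5000.0, 'tax_withheld': 800.0}
```

Adding a customer then means adding one mapping table rather than a bespoke engineering project, which is the shift in economics the article describes.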

As a result, says Ms Schreiber, ADP can now sell the tax filing service to medium-sized companies it could not profitably reach before. It has also been able to change the way it prices its service, potentially making it more attractive.

"It allowed us to promote it as more of a standard model, rather than charging for it as a professional service where we bill by the hour," Ms Schreiber says.

If cloud computing is to become more than an empty promise, it is this type of new business potential that will account for the shift.

IBM, which has just revamped its cloud computing strategy to base it around services like the one sold to ADP, says this highlights the way the new technology is likely to be felt in the day-to-day business world.

"Taking the operating cost out of service delivery" is one of the big opportunities for companies in many industries, says Mike Daniels, head of IBM's services division. The key, adds Erich Clementi, head of strategy for the company's cloud business, is the "extreme standardisation" made possible by the central delivery of a service. By streamlining Individual processes like this, businesses will be able to create more flexible services, and at a lower standard cost, he says.

As the ADP case suggests, this could open up new business opportunities. For companies in industries like telecommunications, financial services and media and entertainment, pushing some parts of their processes into the cloud will make it possible to "reach markets that weren't reachable before," says Frank Gens, an analyst at IDC. "It will become a fundamental part of the model for all companies trying to reach emerging markets."

Until now, most of the attention in cloud computing has been on the so-called "public clouds" run by companies like Amazon.com and Salesforce.com - centralised services where companies can buy computing resources in much the way they buy electricity.

Services like these have mainly appealed to start-up companies or those looking to create new businesses from scratch. Starting with a blank sheet of paper, designing a company's processes with no "on-premise" systems can be highly appealing.

But for most companies - with large sunk investments in IT systems and an understandable aversion to handing over control of their most important corporate data - this is too big a step to take.

Much of the focus of the big tech companies is now on refining these services to make them appeal to established companies. Mr Daniels compares it to the emergence of e-business in the early days of the internet: after a brief flurry of excitement over the potential of pure-play dotcoms to topple business leaders in many industries, the new technology was applied to enhance the operations of established businesses. It was Walmart, not Pets.com, that won the day, he says.

"The belief is, the money will really be in the enterprise loads, and no one has really untapped that yet," adds Paul Maritz, chief executive officer of VMware, which makes some of the key software for data centres that deliver cloud services.

The key to unlocking this potential is what the tech industry calls "hybrid clouds" - combinations of on-premise and remote, third-party systems that can be combined to create a service, much as ADP found with its tax-filing service.

To make this work, companies need to isolate individual processes that they can outsource, and accept a much higher level of standardisation in these areas, Mr Daniels says. He compares it to the standardisation that has already been imposed on many service functions inside companies, like human resources.

The same constraints are now being placed on the IT departments’ application programmers, he says. They will lose some choice in the platforms they build on and will have to choose from a narrower "catalogue" of IT services, but with significant benefits to their companies in terms of operating flexibility and cost.

These standardised services, in turn, will evolve to suit the needs of particular industries, bringing what IBM says will be a new addition to the IT lexicon: "industry clouds."

This is all a long way from the model of fully outsourced "public clouds" that first drove interest in the new technology architecture. To the tech purists, it will smack of compromise, surrendering some of the scale benefits promised by fully centralised computing.

"There's no question, you lose a lot of the economies of the public cloud," says Mr Gens. "As soon as you say ‘private', you're talking a higher price point."

Long term, that makes the full cloud computing model an appealing one. But for the foreseeable future, the gains seen by most businesses will come from more modest and achievable goals.

Copyright The Financial Times Limited 2010. Print a single copy of this article for personal use. Contact us if you wish to print more to distribute to others.

Tuesday, October 12, 2010

'We're at the edge of Web 3.0'

Investment: Semantic web applications could get built in New Brunswick based on work at the universities and the National Research Council, says Open Text executive
By REBECCA PENTY
Telegraph-Journal

SAINT JOHN - Tom Jenkins says New Brunswick has the same opportunity with the semantic web that Waterloo, Ont. did when a local university there bet on software developed by some professors who spun it out in 1991 to create Open Text Corp. (TSX:OTC).

Just "a few million dollars" in investment in search engine software under development over a decade by the University of Waterloo helped create a firm that raked in US$785.7 million in revenue last year, he said.

Jenkins, the executive chairman and chief strategy officer of Waterloo-based Open Text - an enterprise software company that now employs 3,400 - spoke at the New Brunswick Innovation Forum Wednesday afternoon about the future of the World Wide Web.

The self-described serial entrepreneur, who got involved with Open Text three years after the company's formation, said in an interview that New Brunswick should bet on the semantic web expertise at its universities and at the National Research Council's Institute for Information Technology.

"We're going to be in Web 2.0 for many years to come but we're just at the edge of Web 3.0, which is referred to as the semantic web," Jenkins said.

The semantic web is all about helping computers find meaning in words to build connections - instead of humans having to surf the web to find what they're looking for.
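A tiny example of what machine-readable meaning looks like in practice, using the widely available Python rdflib library (a generic RDF/SPARQL sketch, not Open Text's technology): facts are stored as subject-predicate-object triples that software can query directly, rather than prose it has to guess at. The example facts and namespace are illustrative.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Facts as triples: subject, predicate, object.
g.add((EX.OpenText, RDF.type, EX.SoftwareCompany))
g.add((EX.OpenText, EX.headquarteredIn, Literal("Waterloo, Ontario")))

# A SPARQL query: "which software companies are there, and where are they based?"
query = """
    SELECT ?company ?place WHERE {
        ?company a ex:SoftwareCompany ;
                 ex:headquarteredIn ?place .
    }
"""
for company, place in g.query(query, initNs={"ex": EX}):
    print(company, place)   # http://example.org/OpenText Waterloo, Ontario
```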


The National Research Council formed the Semantic Web Lab (SemWebLab) in Fredericton in 2002 and the University of New Brunswick has several computer science professors working on semantic web technology development; Mount Allison University and Université de Moncton are also somewhat involved in the semantic web hub that is forming in the province.

"Where New Brunswick goes with that is it first of all starts to help the industries of New Brunswick apply this technology, not just the very cutting-edge of semantic but everything else that came before," Jenkins said.

Open Text uses existing semantic web tools, for example, to help the world's most prominent media organizations offer story suggestions to online readers based on what they've already read.

He said semantic web applications could get built in New Brunswick based on work at the universities and the National Research Council that would allow people to use the iPhone, iPad or BlackBerry more effectively.

"That's how software companies get formed. That's how Open Text got started," Jenkins said.

Open Text provided early web search tools for MSN and Yahoo - which launched in the mid-1990s - and calls itself "the original Google."

Today, the firm builds internal social networking sites (like Facebook) for companies.

Jenkins said one-third of all people on the web view content using Open Text technology - amounting to 500 million users - but just don't know it.

The executive, also the chairman of the Canadian Digital Media Network organized by the federal government to mould how Canada fits into the web's future, said New Brunswick could contribute by bringing forth its semantic web expertise.

That's part of why Jenkins was in New Brunswick on Wednesday.

His meetings included visits to the National Research Council, University of New Brunswick, and talks with such leaders as premier-elect David Alward and J.D. Irving, Limited president Jim Irving.

The Waterloo executive, on his first trip to the province, was also scoping out opportunities for Open Text.

"I'm here from a corporate point of view to understand what are the research opportunities, what are the investment opportunities, that sort of thing," he said, noting his firm has invested across Canada in research centres and has acquired half a dozen small companies to give them an international market channel.

Thursday, September 09, 2010

The Future Of Reading

By Jonah Lehrer September 8, 2010 | 10:59 pm | Categories: Frontal Cortex
I think it’s pretty clear that the future of books is digital. I’m sure we’ll always have deckle-edge hardcovers and mass market paperbacks, but I imagine the physical version of books will soon assume a cultural place analogous to that of FM radio. Although the radio is always there (and isn’t that nice?), I really only use it when I’m stuck in a rental car and forgot my auxiliary input cord. The rest of the time I’m relying on shuffle and podcasts.

I love books deeply. I won’t bore you with descriptions of my love other than to say that, when I moved back from England, I packed 9 pounds of clothes and 45 pounds of books in one of my checked bags. (I have a weakness for British covers.) And when my luggage was over the fifty pound airline limit, I started chucking T-shirts.

So I’m nervous about the rise of the Kindle and the Nook and the iBookstore. The book, after all, is a time-tested technology. We know that it can endure, and that the information we encode in volutes of ink on pulped trees can last for centuries. That’s why we still have Shakespeare Folios and why I can buy a 150-year-old book on Alibris for 99 cents. There are so many old books!

And yet, I also recognize the astonishing potential of digital texts and e-readers. For me, the most salient fact is this: It’s never been easier to buy books, read books, or read about books you might want to buy. How can that not be good?

That said, I do have a nagging problem with the merger of screens and sentences. My problem is that consumer technology moves in a single direction: It’s constantly making it easier for us to perceive the content. This is why your TV is so high-def, and your computer monitor is so bright and clear. For the most part, this technological progress is all to the good. (I still can’t believe that people watched golf before there were HD screens. Was the ball even visible? For me, the pleasure of televised golf is all about the lush clarity of grass.) Nevertheless, I worry that this same impulse – making content easier and easier to see – could actually backfire with books. We will trade away understanding for perception. The words will shimmer on the screen, but the sentences will be quickly forgotten.

Let me explain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped illuminate the neural anatomy of reading. It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts. One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading. The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning. According to Dehaene, this ventral pathway is turned on by “routinized, familiar passages” of prose, and relies on a bit of cortex known as the visual word form area (VWFA). When you are reading a straightforward sentence, or a paragraph full of tropes and clichés, you’re almost certainly relying on this ventral neural highway. As a result, the act of reading seems effortless and easy. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The second reading pathway – it’s known as the dorsal stream – is turned on whenever we’re forced to pay conscious attention to a sentence, perhaps because of an obscure word, or an awkward subclause, or bad handwriting. (In his experiments, Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Dehaene’s research demonstrates that even fluent adults are still forced to occasionally make sense of texts. We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.

This suggests that the act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.

So here’s my wish for e-readers. I’d love them to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult. Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme. Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway. We won’t just scan the words – we will contemplate their meaning.

My larger anxiety has to do with the sprawling influence of technology. Sooner or later, every medium starts to influence the message. I worry that, before long, we’ll become so used to the mindless clarity of e-ink – to these screens that keep on getting better – that the technology will feedback onto the content, making us less willing to endure harder texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a literate clause. And that would be a shame, because not every sentence should be easy to read.

Bonus point: I sometimes wonder why I’m only able to edit my own writing after it has been printed out, in 3-D form. My prose will always look so flawless on the screen, but then I read the same words on the physical page and I suddenly see all my clichés and banalities and excesses. Why is this the case? Why do I only notice my mistakes after they’re printed on dead trees? I think the same ventral/dorsal explanation applies. I’m so used to seeing my words on the screen – after all, I wrote them on the screen – that seeing them in a slightly different form provides enough tension to awake my dorsal stream, restoring a touch of awareness to the process of reading. And that’s when I get out my red pen.

Bonus bonus point: Perhaps the pleasure of reading on my Kindle – it’s so light in the hand, with such nicely rendered fonts – explains why it has quickly become an essential part of my sleep routine. The fact that it’s easier to read might explain why it’s also easier for me to fall asleep.



Read more: http://www.wired.com/wiredscience/2010/09/the-future-of-reading-2/

The Web Is Dead. Long Live the Internet
By Chris Anderson and Michael Wolff August 17, 2010 | 9:00 am | Wired September 2010

[Chart: proportion of Internet traffic by application, 1990–2010. Sources: Cisco estimates based on CAIDA publications; Andrew Odlyzko]



Two decades after its birth, the World Wide Web is in decline, as simpler, sleeker services — think apps — are less about the searching and more about the getting. Chris Anderson explains how this new paradigm reflects the inevitable course of capitalism. And Michael Wolff explains why the new breed of media titan is forsaking the Web for more promising (and profitable) pastures.



Who’s to Blame: Us
As much as we love the open, unfettered Web, we’re abandoning it for simpler, sleeker services that just work.
By Chris Anderson
You wake up and check your email on your bedside iPad — that’s one app. During breakfast you browse Facebook, Twitter, and The New York Times — three more apps. On the way to the office, you listen to a podcast on your smartphone. Another app. At work, you scroll through RSS feeds in a reader and have Skype and IM conversations. More apps. At the end of the day, you come home, make dinner while listening to Pandora, play some games on Xbox Live, and watch a movie on Netflix’s streaming service.

You’ve spent the day on the Internet — but not on the Web. And you are not alone.

This is not a trivial distinction. Over the past few years, one of the most important shifts in the digital world has been the move from the wide-open Web to semiclosed platforms that use the Internet for transport but not the browser for display. It’s driven primarily by the rise of the iPhone model of mobile computing, and it’s a world Google can’t crawl, one where HTML doesn’t rule. And it’s the world that consumers are increasingly choosing, not because they’re rejecting the idea of the Web but because these dedicated platforms often just work better or fit better into their lives (the screen comes to them, they don’t have to go to the screen). The fact that it’s easier for companies to make money on these platforms only cements the trend. Producers and consumers agree: The Web is not the culmination of the digital revolution.

A decade ago, the ascent of the Web browser as the center of the computing world appeared inevitable. It seemed just a matter of time before the Web replaced PC application software and reduced operating systems to a “poorly debugged set of device drivers,” as Netscape cofounder Marc Andreessen famously said. First Java, then Flash, then Ajax, then HTML5 — increasingly interactive online code — promised to put all apps in the cloud and replace the desktop with the webtop. Open, free, and out of control.

But there has always been an alternative path, one that saw the Web as a worthy tool but not the whole toolkit. In 1997, Wired published a now-infamous “Push!” cover story, which suggested that it was time to “kiss your browser goodbye.” The argument then was that “push” technologies such as PointCast and Microsoft’s Active Desktop would create a “radical future of media beyond the Web.”

“Sure, we’ll always have Web pages. We still have postcards and telegrams, don’t we? But the center of interactive media — increasingly, the center of gravity of all media — is moving to a post-HTML environment,” we promised nearly a decade and a half ago. The examples of the time were a bit silly — a “3-D furry-muckers VR space” and “headlines sent to a pager” — but the point was altogether prescient: a glimpse of the machine-to-machine future that would be less about browsing and more about getting.

Who’s to Blame: Them
Chaos isn’t a business model. A new breed of media moguls is bringing order — and profits — to the digital world.
By Michael Wolff
An amusing development in the past year or so — if you regard post-Soviet finance as amusing — is that Russian investor Yuri Milner has, bit by bit, amassed one of the most valuable stakes on the Internet: He’s got 10 percent of Facebook. He’s done this by undercutting traditional American VCs — the Kleiners and the Sequoias who would, in days past, insist on a special status in return for their early investment. Milner not only offers better terms than VC firms, he sees the world differently. The traditional VC has a portfolio of Web sites, expecting a few of them to be successes — a good metaphor for the Web itself, broad not deep, dependent on the connections between sites rather than any one, autonomous property. In an entirely different strategic model, the Russian is concentrating his bet on a unique power bloc. Not only is Facebook more than just another Web site, Milner says, but with 500 million users it’s “the largest Web site there has ever been, so large that it is not a Web site at all.”

According to Compete, a Web analytics company, the top 10 Web sites accounted for 31 percent of US pageviews in 2001, 40 percent in 2006, and about 75 percent in 2010. “Big sucks the traffic out of small,” Milner says. “In theory you can have a few very successful individuals controlling hundreds of millions of people. You can become big fast, and that favors the domination of strong people.”

Milner sounds more like a traditional media mogul than a Web entrepreneur. But that’s exactly the point. If we’re moving away from the open Web, it’s at least in part because of the rising dominance of businesspeople more inclined to think in the all-or-nothing terms of traditional media than in the come-one-come-all collectivist utopianism of the Web. This is not just natural maturation but in many ways the result of a competing idea — one that rejects the Web’s ethic, technology, and business models. The control the Web took from the vertically integrated, top-down media world can, with a little rethinking of the nature and the use of the Internet, be taken back.

This development — a familiar historical march, both feudal and corporate, in which the less powerful are sapped of their reason for being by the better resourced, organized, and efficient — is perhaps the rudest shock possible to the leveled, porous, low-barrier-to-entry ethos of the Internet Age. After all, this is a battle that seemed fought and won — not just toppling newspapers and music labels but also AOL and Prodigy and anyone who built a business on the idea that a curated experience would beat out the flexibility and freedom of the Web.




[Illustration: Dirk Fowler]


As it happened, PointCast, a glorified screensaver that could inadvertently bring your corporate network to its knees, quickly imploded, taking push with it. But just as Web 2.0 is simply Web 1.0 that works, the idea has come around again. Those push concepts have now reappeared as APIs, apps, and the smartphone. And this time we have Apple and the iPhone/iPad juggernaut leading the way, with tens of millions of consumers already voting with their wallets for an app-led experience. This post-Web future now looks a lot more convincing. Indeed, it’s already here.

The Web is, after all, just one of many applications that exist on the Internet, which uses the IP and TCP protocols to move packets around. This architecture — not the specific applications built on top of it — is the revolution. Today the content you see in your browser — largely HTML data delivered via the http protocol on port 80 — accounts for less than a quarter of the traffic on the Internet … and it’s shrinking. The applications that account for more of the Internet’s traffic include peer-to-peer file transfers, email, company VPNs, the machine-to-machine communications of APIs, Skype calls, World of Warcraft and other online games, Xbox Live, iTunes, voice-over-IP phones, iChat, and Netflix movie streaming. Many of the newer Net applications are closed, often proprietary, networks.
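Anderson's distinction is easy to demonstrate in code: the browser's Web is just one application-layer protocol, HTTP, riding on TCP. This minimal sketch (example.com stands in for any host) fetches a page by speaking raw HTTP over a TCP socket to port 80, with no browser involved.

# Raw HTTP over TCP: the Web reduced to bytes sent to port 80.
import socket

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0])  # status line, e.g. b'HTTP/1.1 200 OK'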

And the shift is only accelerating. Within five years, Morgan Stanley projects, the number of users accessing the Net from mobile devices will surpass the number who access it from PCs. Because the screens are smaller, such mobile traffic tends to be driven by specialty software, mostly apps, designed for a single purpose. For the sake of the optimized experience on mobile devices, users forgo the general-purpose browser. They use the Net, but not the Web. Fast beats flexible.

This was all inevitable. It is the cycle of capitalism. The story of industrial revolutions, after all, is a story of battles over control. A technology is invented, it spreads, a thousand flowers bloom, and then someone finds a way to own it, locking out others. It happens every time.

Take railroads. Uniform and open gauge standards helped the industry boom and created an explosion of competitors — in 1920, there were 186 major railroads in the US. But eventually the strongest of them rolled up the others, and today there are just seven — a regulated oligopoly. Or telephones. The invention of the switchboard was another open standard that allowed networks to interconnect. After telephone patents held by AT&T’s parent company expired in 1894, more than 6,000 independent phone companies sprouted up. But by 1939, AT&T controlled nearly all of the US’s long-distance lines and some four-fifths of its telephones. Or electricity. In the early 1900s, after the standardization to alternating current distribution, hundreds of small electric utilities were consolidated into huge holding companies. By the late 1920s, the 16 largest of those commanded more than 75 percent of the electricity generated in the US.

Indeed, there has hardly ever been a fortune created without a monopoly of some sort, or at least an oligopoly. This is the natural path of industrialization: invention, propagation, adoption, control.

Now it’s the Web’s turn to face the pressure for profits and the walled gardens that bring them. Openness is a wonderful thing in the nonmonetary economy of peer production. But eventually our tolerance for the delirious chaos of infinite competition finds its limits. Much as we love freedom and choice, we also love things that just work, reliably and seamlessly. And if we have to pay for what we love, well, that increasingly seems OK. Have you looked at your cell phone or cable bill lately?

As Jonathan L. Zittrain puts it in The Future of the Internet — And How to Stop It, “It is a mistake to think of the Web browser as the apex of the PC’s evolution.” Today the Internet hosts countless closed gardens; in a sense, the Web is an exception, not the rule.
The truth is that the Web has always had two faces. On the one hand, the Internet has meant the breakdown of incumbent businesses and traditional power structures. On the other, it’s been a constant power struggle, with many companies banking their strategy on controlling all or large chunks of the TCP/IP-fueled universe. Netscape tried to own the homepage; Amazon.com tried to dominate retail; Yahoo, the navigation of the Web.

Google was the endpoint of this process: It may represent open systems and leveled architecture, but with superb irony and strategic brilliance it came to almost completely control that openness. It’s difficult to imagine another industry so thoroughly subservient to one player. In the Google model, there is one distributor of movies, which also owns all the theaters. Google, by managing both traffic and sales (advertising), created a condition in which it was impossible for anyone else doing business in the traditional Web to be bigger than or even competitive with Google. It was the imperial master over the world’s most distributed systems. A kind of Rome.

In an analysis that sees the Web, in the description of Interactive Advertising Bureau president Randall Rothenberg, as driven by “a bunch of megalomaniacs who want to own the entirety of the world,” it is perhaps inevitable that some of those megalomaniacs began to see replicating Google’s achievement as their fundamental business challenge. And because Google so dominated the Web, that meant building an alternative to the Web.


Enter Facebook. The site began as a free but closed system. It required not just registration but an acceptable email address (from a university, or later, from any school). Google was forbidden to search through its servers. By the time it opened to the general public in 2006, its clublike, ritualistic, highly regulated foundation was already in place. Its very attraction was that it was a closed system. Indeed, Facebook’s organization of information and relationships became, in a remarkably short period of time, a redoubt from the Web — a simpler, more habit-forming place. The company invited developers to create games and applications specifically for use on Facebook, turning the site into a full-fledged platform. And then, at some critical-mass point, not just in terms of registration numbers but of sheer time spent, of habituation and loyalty, Facebook became a parallel world to the Web, an experience that was vastly different and arguably more fulfilling and compelling and that consumed the time previously spent idly drifting from site to site. Even more to the point, Facebook founder Mark Zuckerberg possessed a clear vision of empire: one in which the developers who built applications on top of the platform that his company owned and controlled would always be subservient to the platform itself. It was, all of a sudden, not just a radical displacement but also an extraordinary concentration of power. The Web of countless entrepreneurs was being overshadowed by the single entrepreneur-mogul-visionary model, a ruthless paragon of everything the Web was not: rigid standards, high design, centralized control.

Striving megalomaniacs like Zuckerberg weren’t the only ones eager to topple Google’s model of the open Web. Content companies, which depend on advertising to fund the creation and promulgation of their wares, appeared to be losing faith in their ability to do so online. The Web was built by engineers, not editors. So nobody paid much attention to the fact that HTML-constructed Web sites — the most advanced form of online media and design — turned out to be a pretty piss-poor advertising medium.

For quite a while this was masked by the growth of the audience share, followed by an ever-growing ad-dollar share, until, about two years ago, things started to slow down. The audience continued to grow at a ferocious rate — about 35 percent of all our media time is now spent on the Web — but ad dollars weren’t keeping pace. Online ads had risen to some 14 percent of consumer advertising spending but had begun to level off. (In contrast, TV, which also accounts for 35 percent of our media time, gets nearly 40 percent of ad dollars.)



Monopolies are actually even more likely in highly networked markets like the online world. The dark side of network effects is that rich nodes get richer. Metcalfe’s law, which states that the value of a network increases in proportion to the square of the number of its users, creates winner-take-all markets, where the gap between the number one and number two players is typically large and growing.
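As a back-of-the-envelope illustration (the user counts below are made up, not market data), Metcalfe's law implies that a network with five times the users of a rival is worth roughly twenty-five times as much:

# Metcalfe's law in miniature: value grows with the square of users,
# so the value gap widens much faster than the user gap. Numbers are
# illustrative only.
def metcalfe_value(users: int) -> int:
    return users * users  # value proportional to n squared

big, small = 500_000_000, 100_000_000               # 5x the users...
print(metcalfe_value(big) / metcalfe_value(small))  # ...25x the value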


So what took so long? Why wasn’t the Web colonized by monopolists a decade ago? Because it was in its adolescence then, still innovating quickly with a fresh and growing population of users always looking for something new. Network-driven domination was short-lived. Friendster got huge while social networking was in its infancy, and fickle consumers were still keen to experiment with the next new thing. They found another shiny service and moved on, just as they had abandoned SixDegrees.com before it. In the expanding universe of the early Web, AOL’s walled garden couldn’t compete with what was outside the walls, and so the walls fell.

But the Web is now 18 years old. It has reached adulthood. An entire generation has grown up in front of a browser. The exploration of a new world has turned into business as usual. We get the Web. It’s part of our life. And we just want to use the services that make our life better. Our appetite for discovery slows as our familiarity with the status quo grows.

Blame human nature. As much as we intellectually appreciate openness, at the end of the day we favor the easiest path. We’ll pay for convenience and reliability, which is why iTunes can sell songs for 99 cents despite the fact that they are out there, somewhere, in some form, for free. When you are young, you have more time than money, and LimeWire is worth the hassle. As you get older, you have more money than time. The iTunes toll is a small price to pay for the simplicity of just getting what you want. The more Facebook becomes part of your life, the more locked in you become. Artificial scarcity is the natural goal of the profit-seeking.
What’s more, there was the additionally sobering and confounding fact that an online consumer continued to be worth significantly less than an offline one. For a while, this was seen as inevitable right-sizing: Because everything online could be tracked, advertisers no longer had to pay to reach readers who never saw their ads. You paid for what you got.

Unfortunately, what you got wasn’t much. Consumers weren’t motivated by display ads, as evidenced by the share of the online audience that bothered to click on them. (According to a 2009 comScore study, only 16 percent of users ever click on an ad, and 8 percent of users accounted for 85 percent of all clicks.) The Web might generate some clicks here and there, but you had to aggregate millions and millions of them to make any money (which is what Google, and basically nobody else, was able to do). And the Web almost perversely discouraged the kind of systematized, coordinated, focused attention upon which brands are built — the prime, or at least most lucrative, function of media.

What’s more, this medium rendered powerless the marketers and agencies that might have been able to turn this chaotic mess into an effective selling tool — the same marketers and professional salespeople who created the formats (the variety shows, the 30-second spots, the soap operas) that worked so well in television and radio. Advertising powerhouse WPP, for instance, with its colossal network of marketing firms — the same firms that had shaped traditional media by matching content with ads that moved the nation — may still represent a large share of Google’s revenue, but it pales next to the greater population of individual sellers that use Google’s AdWords and AdSense programs.



There is an analogy to the current Web in the first era of the Internet. In the 1990s, as it became clear that digital networks were the future, there were two warring camps. One was the traditional telcos, on whose wires these feral bits of the young Internet were being sent. The telcos argued that the messy protocols of TCP/IP — all this unpredictable routing and those lost packets requiring resending — were a cry for help. What consumers wanted were “intelligent” networks that could (for a price) find the right path and provision the right bandwidth so that transmissions would flow uninterrupted. Only the owners of the networks could put the intelligence in place at the right spots, and thus the Internet would become a value-added service provided by the AT&Ts of the world, much like ISDN before it. The rallying cry was “quality of service” (QoS). Only telcos could offer it, and as soon as consumers demanded it, the telcos would win.

The opposing camp argued for “dumb” networks. Rather than cede control to the telcos to manage the path that bits took, argued its proponents, just treat the networks as dumb pipes and let TCP/IP figure out the routing. So what if you have to resend a few times, or the latency is all over the place. Just keep building more capacity — “overprovision bandwidth” — and it will be Good Enough.

On the underlying Internet itself, Good Enough has won. We stare at the spinning buffering disks on our YouTube videos rather than accept the Faustian bargain of some Comcast/Google QoS bandwidth deal that we would invariably end up paying more for. Aside from some corporate networks, dumb pipes are what the world wants from telcos. The innovation advantages of an open marketplace outweigh the limited performance advantages of a closed system.

But the Web is a different matter. The marketplace has spoken: When it comes to the applications that run on top of the Net, people are starting to choose quality of service. We want TweetDeck to organize our Twitter feeds because it’s more convenient than the Twitter Web page. The Google Maps mobile app on our phone works better in the car than the Google Maps Web site on our laptop. And we’d rather lean back to read books with our Kindle or iPad app than lean forward to peer at our desktop browser.

At the application layer, the open Internet has always been a fiction. It was only because we confused the Web with the Net that we didn’t see it. The rise of machine-to-machine communications — iPhone apps talking to Twitter APIs — is all about control. Every API comes with terms of service, and Twitter, Amazon.com, Google, or any other company can control the use as they will. We are choosing a new form of QoS: custom applications that just work, thanks to cached content and local code. Every time you pick an iPhone app instead of a Web site, you are voting with your finger: A better experience is worth paying for, either in cash or in implicit acceptance of a non-Web standard.
One result of the relative lack of influence of professional salespeople and hucksters — the democratization of marketing, if you will — is that advertising on the Web has not developed in the subtle and crafty and controlling ways it did in other mediums. The ineffectual banner ad, created (indeed by the founders of this magazine) in 1994 — and never much liked by anyone in the marketing world — still remains the foundation of display advertising on the Web.

And then there’s the audience.

At some never-quite-admitted level, the Web audience, however measurable, is nevertheless a fraud. Nearly 60 percent of people find Web sites from search engines, much of which may be driven by SEO, or “search engine optimization” — a new-economy acronym that refers to gaming Google’s algorithm to land top results for hot search terms. In other words, many of these people have been essentially corralled into clicking a random link and may have no idea why they are visiting a particular site — or, indeed, what site they are visiting. They are the exact opposite of a loyal audience, the kind that you might expect, over time, to inculcate with your message.

Web audiences have grown ever larger even as the quality of those audiences has shriveled, leading advertisers to pay less and less to reach them. That, in turn, has meant the rise of junk-shop content providers — like Demand Media — which have determined that the only way to make money online is to spend even less on content than advertisers are willing to pay to advertise against it. This further cheapens online content, makes visitors even less valuable, and continues to diminish the credibility of the medium.

Even in the face of this downward spiral, the despairing have hoped. But then came the recession, and the panic button got pushed. Finally, after years of experimentation, content companies came to a disturbing conclusion: The Web did not work. It would never bring in the bucks. And so they began looking for a new model, one that leveraged the power of the Internet without the value-destroying side effects of the Web. And they found Steve Jobs, who — rumor had it — was working on a new tablet device.

Now, on the technology side, what the Web has lacked in its determination to turn itself into a full-fledged media format is anybody who knew anything about media. Likewise, on the media side, there wasn’t anybody who knew anything about technology. This has been a fundamental and aching disconnect: There was no sublime integration of content and systems, of experience and functionality — no clever, subtle, Machiavellian overarching design able to create that codependent relationship between audience, producer, and marketer.



In the media world, this has taken the form of a shift from ad-supported free content to freemium — free samples as marketing for paid services — with an emphasis on the “premium” part. On the Web, average CPMs (the price of ads per thousand impressions) in key content categories such as news are falling, not rising, because user-generated pages are flooding Facebook and other sites. The assumption had been that once the market matured, big companies would be able to reverse the hollowing-out trend of analog dollars turning into digital pennies. Sadly that hasn’t been the case for most on the Web, and by the looks of it there’s no light at the end of that tunnel. Thus the shift to the app model on rich media platforms like the iPad, where limited free content drives subscription revenue (check out Wired’s cool new iPad app!).
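The CPM arithmetic behind "analog dollars, digital pennies" is worth spelling out. The two rates below are illustrative assumptions, not actual market figures, but they show how the same million impressions can fund a print-era operation or barely cover hosting:

# Revenue per thousand impressions: revenue = impressions / 1000 * CPM.
def ad_revenue(impressions: int, cpm: float) -> float:
    return impressions / 1000 * cpm

print(ad_revenue(1_000_000, 20.0))  # assumed print-era CPM: $20,000
print(ad_revenue(1_000_000, 1.0))   # assumed commodity-Web CPM: $1,000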

The Web won’t take the sequestering of its commercial space easily. The defenders of the unfettered Web have their hopes set on HTML5 — the latest version of Web-building code that offers applike flexibility — as an open way to satisfy the desire for quality of service. If a standard Web browser can act like an app, offering the sort of clean interface and seamless interactivity that iPad users want, perhaps users will resist the trend to the paid, closed, and proprietary. But the business forces lining up behind closed platforms are big and getting bigger. This is seen by many as a battle for the soul of the digital frontier.

Zittrain argues that the demise of the all-encompassing, wide-open Web is a dangerous thing, a loss of open standards and services that are “generative” — that allow people to find new uses for them. “The prospect of tethered appliances and software as service,” he warns, “permits major regulatory intrusions to be implemented as minor technical adjustments to code or requests to service providers.”

But what is actually emerging is not quite the bleak future of the Internet that Zittrain envisioned. It is only the future of the commercial content side of the digital economy. Ecommerce continues to thrive on the Web, and no company is going to shut its Web site as an information resource. More important, the great virtue of today’s Web is that so much of it is noncommercial. The wide-open Web of peer production, the so-called generative Web where everyone is free to create what they want, continues to thrive, driven by the nonmonetary incentives of expression, attention, reputation, and the like. But the notion of the Web as the ultimate marketplace for digital delivery is now in doubt.

The Internet is the real revolution, as important as electricity; what we do with it is still evolving. As it moved from your desktop to your pocket, the nature of the Net changed. The delirious chaos of the open Web was an adolescent phase subsidized by industrial giants groping their way in a new world. Now they’re doing what industrialists do best — finding choke points. And by the looks of it, we’re loving it.

Editor in chief Chris Anderson (canderson@wired.com) wrote about the new industrial revolution in issue 18.02.
Jobs perfectly fills that void. Other technologists have steered clear of actual media businesses, seeing themselves as renters of systems and third-party facilitators, often deeply wary of any involvement with content. (See, for instance, Google CEO Eric Schmidt’s insistence that his company is not in the content business.) Jobs, on the other hand, built two of the most successful media businesses of the past generation: iTunes, a content distributor, and Pixar, a movie studio. Then, in 2006, with the sale of Pixar to Disney, Jobs became the biggest individual shareholder in one of the world’s biggest traditional media conglomerates — indeed much of Jobs’ personal wealth lies in his traditional media holdings.

In fact, Jobs had, through iTunes, aligned himself with traditional media in a way that Google has always resisted. In Google’s open and distributed model, almost anybody can advertise on nearly any site and Google gets a cut — its interests are with the mob. Apple, on the other hand, gets a cut any time anybody buys a movie or song — its interests are aligned with the traditional content providers. (This is, of course, a complicated alignment, because in each deal, Apple has quickly come to dominate the relationship.)

So it’s not shocking that Jobs’ iPad-enabled vision of media’s future looks more like media’s past. In this scenario, Jobs is a mogul straight out of the studio system. While Google may have controlled traffic and sales, Apple controls the content itself. Indeed, it retains absolute approval rights over all third-party applications. Apple controls the look and feel and experience. And, what’s more, it controls both the content-delivery system (iTunes) and the devices (iPods, iPhones, and iPads) through which that content is consumed.

Since the dawn of the commercial Web, technology has eclipsed content. The new business model is to try to let the content — the product, as it were — eclipse the technology. Jobs and Zuckerberg are trying to do this like old-media moguls, fine-tuning all aspects of their product, providing a more designed, directed, and polished experience. The rising breed of exciting Internet services — like Spotify, the hotly anticipated streaming music service; and Netflix, which lets users stream movies directly to their computer screens, Blu-ray players, or Xbox 360s — also pull us back from the Web. We are returning to a world that already exists — one in which we chase the transformative effects of music and film instead of our brief (relatively speaking) flirtation with the transformative effects of the Web.

After a long trip, we may be coming home.

Michael Wolff (michael@burnrate.com) is a new contributing editor for Wired. He is also a columnist for Vanity Fair and the founder of Newser, a news-aggregation site.


Google unveils key update on searches

By Richard Waters in San Francisco

Published: September 9 2010 00:50 | Last updated: September 9 2010 00:50

Google has unveiled changes to the way it presents search results in what it described as one of the most significant updates in its 12-year history.

The new approach is intended to help users find results more quickly, though some search experts said that indirect changes to how users conduct their searches could also have a wider impact on the many businesses that advertise on Google or rely on traffic from the search engine.

The new feature, called Google Instant, displays full search results as users type in queries, without waiting for them to finish typing or to hit “enter”. “It’s searching before you type – we’re predicting what query you’re likely to do and giving you results for that,” said Marissa Mayer, Google’s head of search products and user experience.
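A toy version of the idea, not Google's actual system, helps make it concrete: guess the likeliest completion of the typed prefix from a log of past query frequencies, then show results for that guess before the user finishes typing.

# Search-as-you-type sketch: predict the full query from its prefix.
from collections import Counter

query_log = Counter({"weather": 900, "wedding ideas": 500, "web 2.0": 300})

def predict(prefix: str):
    matches = [q for q in query_log if q.startswith(prefix)]
    return max(matches, key=query_log.__getitem__) if matches else None

print(predict("we"))  # 'weather' -- results would be fetched for this guess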

The approach should shave two to five seconds off the average search, Ms Mayer said.

Sergey Brin, co-founder, said the technological advances that had contributed to the new feature highlighted “a little bit of a new dawn in computing”, as companies such as Google, Apple and Amazon experiment with new user interfaces to make it easier to find and use information.

Google said it could not yet determine how far Google Instant would change search behaviour, but some analysts said the impact of the launch could reverberate through the online economy that has built up around the Google search service. “It’s potentially enormously significant,” said Greg Sterling, a US search engine analyst. “Anything that changes the way people interact with search results will affect the many businesses that rely on search.”

He and other analysts said that search users could be drawn to the top results that Google returns as they type their queries, giving extra prominence to companies whose websites come out high in search results. By putting greater emphasis on the top results, the change could have important implications for any business that uses so-called search engine optimisation to try to gain prominence in search results, Mr Sterling said.

Google executives said the new feature should not change the search results that users eventually click on, since the underlying relevance algorithms used to determine the order in which results are shown had not changed.

Danny Sullivan, editor of Search Engine Land, an industry website, said that while the changes might have a marginal impact, suggestions that they would undermine current search engine optimisation practices appeared overstated.

Some analysts also predicted that Google Instant would change the way that search engine users interact with advertising, since adverts will also appear linked to Google’s predictions about what a user is interested in.

Copyright The Financial Times Limited 2010.

Friday, August 06, 2010

Editor’s note: A digital route to the past
By Peter Whitehead, Digital Business editor

Published: June 16 2010 01:02 | Last updated: June 16 2010 01:02

My teenage daughters were recently given an idea of what “smart metering” could mean to them. Smart meters will soon be in many homes, showing inhabitants how much energy they are using and encouraging them to consume less.

The girls were surprised to see what a difference just switching on a kettle could make. But the technology they were seeing for the first time is many decades old – a coin-operated electricity meter, with the wheel that indicates power usage in a highly visible spot in the kitchen of a holiday apartment.

For the first time, they showed an interest in what was switched on and began to see a connection between energy use and the dwindling pile of coins on the window sill.

Smart meters will make this relationship far clearer and more user-friendly. But in essence, this amazing digital technology is merely recreating a connection that will be very familiar to anyone who has ever fed real money into a meter.
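The arithmetic a meter makes visible is simple: energy in kilowatt-hours is power times time, and cost is energy times the tariff. A sketch with illustrative figures (a 2 kW kettle and a 15p/kWh tariff, both assumptions):

# Cost of one kettle boil: kWh = kW x hours; cost = kWh x price.
def boil_cost(power_kw: float, minutes: float, price_per_kwh: float) -> float:
    return power_kw * (minutes / 60) * price_per_kwh

pence = boil_cost(2.0, 4, 0.15) * 100
print(round(pence, 1), "pence per boil")  # about 2p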

Of course, smart meters will also enable new services to emerge, such as flexible payments, whereas the benefits of another digital technology – DAB radio – over traditional analogue are far harder to identify.

I invested in an expensive Pure DAB radio three years ago and still enjoy many of its advanced features, such as the ability to record programmes. Less enjoyable is its insatiable demand for batteries – it has become so costly to feed that it now has to remain attached to the mains in the bedroom.

It also suffers from “all-or-nothing” tuning, so that only stations with strong signals can be received. This is a digital technology that has much to prove before it can be deemed fit to replace existing analogue services.

Digital technology is certainly changing the world – but perhaps not always as fundamentally or as effectively as we might think.

Copyright The Financial Times Limited 2010.