Monthly Archives: January 2007

Google as super-Akamai

Cringely speculates about what Google is going to do with all those data centers, and about how the strategy he thinks they are following could be countered by creating an intra-ISP P2P network. His description of why Microsoft won’t compete with Google even though they can is a pretty good example of disruptiveness in action.

Why don’t CIOs become CEOs?

Interesting thread at Slashdot, based on a vacuous article about why CIOs don’t become CEOs.

Here is my comment as I posted it:

I found May’s utterances utterly superficial and very old-fashioned. First of all, though not many CEOs come from the post of CIO, many top executives have had a stint in the IT part of the organization, learning about what the technology does and how to think about the company as a value-producing system. As for the use of Peters and Waterman’s book as a sort of criterion for what constitutes a good CEO, that is laughable – the book refers to companies, not CEOs. And it has been thoroughly debunked many times.

May’s article would have been right on about 10 years ago. Back then, the reason CIOs did not make it into the top spot (in fact, 40% of CIOs were fired, and the average tenure was about 2.3 years) was that they did not understand that the skills that got you into the top IT position were not the skills that would keep you there.

At CSC, we conducted a survey of CIOs in the top 1000 corporations in the world, and got a surprising result: of the 40% of CIOs that were fired, only a few were fired for "not producing cost-effective IT". The rest went because they could not communicate with the top management group, were ineffective change agents, or could not contribute to business strategy development. In other words, when you are a CIO in a large corporation, you are no longer the IT organization’s representative to top management, but top management’s representative to the IT organization. If you cannot make the transition to thinking about the whole company as a system, you are toast.

There are actually quite a few companies where technical competence is visible in top management. Royal Bank of Scotland comes to mind, with one of the most effective IT strategies in the world (a central "manufacturing division" handles transactions, IT, HR, and call centers, leaving the branch network – and they have more than 50 brands – to serve customers and grow the business). UPS, Wal-Mart, Amex, Wells Fargo, Royal Bank of Canada, some of the large airlines, Dell, quite a number of telcos, some insurance companies (I particularly like a German one called AMB, which runs many insurance companies off the same systems) and many others have top management that understands the impact of IT and thinks of the whole company as a system. Does that mean that they come from the position of CIO? Not necessarily, though some of them do.

Being a CIO is about information, not IT. For that, you have a CTO or a CIO office that handles the technical pieces. Most of the top CIOs I know worry about the customers and the customer experience. One CIO of a large hotel chain worries about the speed of the in-room Internet connection – and whether the ventilation is good enough that you can shave after taking a shower without the mirror getting fogged up.

IT is a tool. It is an important tool, but it is what it does for the customer that matters. And the role of the IT organization is not to make IT elegant – it is to make business elegant. If the tools happen to be boring and the CIO not very visible – so what?

The simpler database

The relational database model, first formulated by Codd around 1970, is the dominant way to store structured data today. However, complex queries against a relational database are fairly slow compared to queries against a search engine index, which (together with the poor user interfaces of relational databases, often mapped directly onto the table structure) means that search engines are now competitors for the job of extracting subsets from structured data. This, along with the difficulty of mapping object structures onto a relational database, leads me to wonder whether not just the query interface, but the whole concept of the relational database, may soon face a disruptive threat from simpler structures.
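To make the speed contrast concrete, here is a toy sketch in Python (the data and names are invented for illustration, not drawn from any particular product): the search engine pays the structuring cost once, when the index is built, while a relational-style selection does its work anew at query time.

    from collections import defaultdict

    # A toy "table" as a list of rows, the relational way: every query scans it.
    rows = [
        {"id": 1, "title": "Relational databases", "body": "Codd tables joins"},
        {"id": 2, "title": "Search engines",       "body": "inverted index terms"},
        {"id": 3, "title": "Hybrid systems",       "body": "index over tables"},
    ]

    def scan_query(term):
        """Relational-style selection: touch every row on every query."""
        return [r["id"] for r in rows if term in r["body"].split()]

    # A toy inverted index: pay the structuring cost once, at load time.
    index = defaultdict(set)
    for r in rows:
        for term in r["body"].split():
            index[term].add(r["id"])

    def index_query(term):
        """Search-engine-style lookup: one dictionary access per term."""
        return sorted(index.get(term, set()))

    print(scan_query("index"))   # [2, 3] -- cost grows with the number of rows
    print(index_query("index"))  # [2, 3] -- cost stays flat as the data grows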

A conversation at a Concours brainstorming session set me dusting off vaguely remembered concepts such as hierarchical and network databases, as well as the old difficulty of storing a multi-component concept with class attributes in a relational database (once characterized by the Economist as trying to store a car by putting the wheels under "W" and the engine under "E"). I found a good introductory text on the associative model of data by Simon Williams here (registration required), which also sums up the differences between the various data models pretty well.
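My toy rendering of the idea, not Williams’ own notation: in the associative model everything is either an item or a link of the form source–verb–target, and a link can itself be the source of another link, which is what lets you hang attributes on relationships instead of forcing them into table columns.

    import itertools

    # A toy associative store: items are plain strings; links are
    # (source, verb, target) tuples, and a link's id can itself be
    # the source or target of a further link.
    links = {}                       # link id -> (source, verb, target)
    ids = itertools.count()

    def link(source, verb, target):
        lid = f"L{next(ids)}"
        links[lid] = (source, verb, target)
        return lid

    # "The car has an engine", and "that engine was fitted in 2006":
    # the second link hangs off the first link, not off a table column.
    l1 = link("my car", "has part", "engine")
    l2 = link(l1, "fitted in", "2006")

    def find(verb, target):
        """All sources related to target by verb."""
        return [s for (s, v, t) in links.values() if v == verb and t == target]

    print(find("has part", "engine"))   # ['my car']
    print(links[l2])                    # ('L0', 'fitted in', '2006')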

Long-term, systems will be simulations of the reality they are to manage, but realities will also be influenced by the differences in processing capacity between humans and machines. A great example of the latter is the concept of random pick, well explained in Chris Anderson’s post about the shoe company Zappos, which stores shoes randomly (with each pair’s UPC scanned) as they come in. The result is a less optimized pick, but the total effort is less than sorting pairs as they arrive. Another effect, of course, is that they spend the least amount of effort on the products that move the least. (Though I love the comment about whether they occasionally have to defragment the warehouse.)
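A minimal sketch of random put-away (the mechanics here are assumed from the description above, not taken from Anderson’s post): incoming pairs go into any free slot, and the only ordered structure is the index from UPC to slot.

    import random

    SLOTS = 20
    warehouse = [None] * SLOTS    # shelf slots, deliberately in no particular order
    locations = {}                # UPC -> list of slots currently holding that item

    def put_away(upc):
        """Store an incoming pair in a random free slot; only the index is ordered."""
        free = [i for i, slot in enumerate(warehouse) if slot is None]
        assert free, "warehouse full -- time to defragment?"
        slot = random.choice(free)
        warehouse[slot] = upc
        locations.setdefault(upc, []).append(slot)
        return slot

    def pick(upc):
        """Retrieve one pair by UPC, from whichever slot it happened to land in."""
        slot = locations[upc].pop()
        warehouse[slot] = None
        return slot

    put_away("012345"); put_away("012345"); put_away("067890")
    print(locations)        # e.g. {'012345': [7, 3], '067890': [15]}
    print(pick("012345"))   # frees whichever slot the index points at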

My apologies for this nerdy detour: I am writing a paper on, among other things, search technology, and participating in an interesting Concours project on Enterprise Architecture.

With that, back to our regular programming…

Competing on Analytics

Just got off the phone from a Concours teleconference with Tom Davenport, Bob Morison and about 50 other participants. The topic was Competing on Analytics, which was also the title of a Harvard Business Review article Tom wrote in January last year, and which forms the basis for a book of the same title coming out soon. Concours is launching a membership program called the Business Analytics Concours (glossy brochure here). The upshot is a revival of, and refocusing on, business analytics, following a number of examples of companies (in particular Harrah’s casinos) that have had success due to their ability to relentlessly analyze and optimize their performance, value offerings and market opportunities.

Tom is an interesting character and a prolific writer on knowledge management, process optimization and knowledge worker productivity.  He outlined how companies that compete on analytics tend to share certain attributes, such as senior management advocacy, an enterprise approach to analysis (rather than letting the thousand analysts bloom), going first deep and then broad, and paying attention to the development of a strong analytical capability.

My role was to comment and to discuss the technology side: While everyone recognizes that competing on analytics is a question of culture, understanding of the business environment and analytical skill and drive, there is a technology side to it as well. What kind of technologies can enable analysis and optimization, what emerging technologies should IT be monitoring and experimenting with in this space, and how would an enterprise architecture for a company with an analytical bent look different from most companies’ architectures today?

Here are my notes:

  • the obvious technologies needed are repositories for data, such as data warehouses and data marts, as well as business intelligence software for analyzing the data
  • on a more abstract level, we need technologies that allow for rapid collection (most data is out there in digital form, but it needs to be made analyzable), structuring (hopefully avoiding human intervention in the costly data cleanup phase) and analysis of a wide range of data (which very often turns into experimentation)
  • in particular, we need technologies that let people develop models from operational data, and redo structuring and categorization in a dynamic and shared way (did anyone say wiki?)
  • a short-term path to better analytics may be search technology, which gives access to unstructured data, allows joining of many sources, and does not require rearchitecting or a massive job of initial categorization and structuring (see the sketch after this list)
  • a sizeable investment in data preparation will strangle the analytical impulse at birth
  • long-term, there are interesting possibilities in the kind of data exchange protocols envisioned by Van Jacobson, i.e. a form of networking that makes data location irrelevant and hides the pathways
  • lastly: We have to realize that this is a cultural, strategic and managerial issue, and that almost any technology can be used in an analytical way. If you are not inclined to analyze your environment, no amount of technology is going to make you do it. In fact, more technology can distance you from the real world, and tempt you into giving people pre-saved spreadsheets and fixed models rather than the ability to analyze
  • an ideal would be to have experimental facilities, where things could be sim’ed out, complete with a button labeled "Make it happen".
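To illustrate the search-technology point from the list above, here is a minimal sketch (the sources, fields and values are invented for illustration): records from several systems with no shared schema go into one inverted index, and an ad-hoc cross-source lookup falls out without any up-front data modeling.

    from collections import defaultdict

    # Records from different systems, with no shared schema -- the index
    # only cares about tokens, so no up-front cleanup or modeling is needed.
    sources = {
        "crm":     [{"customer": "acme", "note": "churn risk flagged"}],
        "billing": [{"account": "acme", "status": "invoice overdue"}],
        "support": [{"caller": "acme", "ticket": "outage complaint"}],
    }

    index = defaultdict(list)   # token -> [(source name, record), ...]
    for name, records in sources.items():
        for rec in records:
            for value in rec.values():
                for token in str(value).lower().split():
                    index[token].append((name, rec))

    # An ad-hoc "join" across all three systems, no rearchitecting required:
    for source, rec in index["acme"]:
        print(source, rec)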

This looks like an interesting project, because it goes right to the heart of what companies must get better at in a world where information spreads rapidly, imitation is easy, and you compete on your evolving optimization and innovation capability rather than individual technologies or services.

Jacobson on the data overlay web

Van Jacobson gives a great talk at Google about the need to create the next generation web, this time in the form of an overarching data exchange protocol. His argument is that we should do to the Internet what the Internet did to proprietary networks: overlay it with a higher degree of abstraction. This time the abstraction sits at the data level: data self-authenticates, so you can use any distribution mechanism and any kind of underlying network.

[Embedded video: http://video.google.com/googleplayer.swf?docId=-6972678839686672840&hl=en]
(If in-post video doesn’t work, try this link.)
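A minimal sketch of the self-authentication idea as I understood it (this is my own toy naming scheme, not Jacobson’s protocol): name data by its hash, and you can fetch it from any host, cache or peer and verify it locally, so the path it took stops mattering.

    import hashlib

    store = {}  # stands in for any distribution mechanism: cache, peer, CDN node

    def publish(data: bytes) -> str:
        """Name data by its content; the name doubles as an integrity check."""
        name = hashlib.sha256(data).hexdigest()
        store[name] = data
        return name

    def fetch(name: str) -> bytes:
        """Retrieve from wherever; trust the data, not the path it took."""
        data = store[name]
        if hashlib.sha256(data).hexdigest() != name:
            raise ValueError("data does not match its name -- discard and refetch")
        return data

    name = publish(b"next-generation web payload")
    print(fetch(name) == b"next-generation web payload")  # True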

Great history of the web, excellent abstraction. Here is his concluding slide:

  • IP rescued us from plumbing at the wire level, but we still have to do it at the data level. A dissemination-based architecture would fix this.
  • Many ad-hoc dissemination overlays have been created (Akamai CDN, BitTorrent, Sonos mesh, Apple Rendezvous) – there’s a demonstrated need.
  • If we are going to have a future, we should rescue some grad students from re-inventing the past.

 (Via Paul Kedrosky, who notes a wonderful little anecdote about Copernicus and his early predictions from the heliocentric system. Not a bad example of a disruptive technology, as it were.)

The Historian and the Wikipedia

Excellent article by Roy Rosenzweig on Wikipedia and history. Very good discussion of the importance of synthetic writing in history, and to what extent the Wikipedia model can provide it. This is at the heart of the coming discussion of whether textbooks and other material for traditional learning can be created through social production.

(Via Paal Lykkja.)

Words to live by

Since it is Friday and the beginning of a new year (and I need the quotation to answer a charge of being a technological optimist (guilty!)): here are the concluding paragraphs of David S. Landes’ incomparable The Wealth and Poverty of Nations (Abacus, 1998):

In this world the optimists have it, not because they are always right, but because they are positive.  Even when wrong, they are positive, and that is the way of achievement, correction, improvement, and success.  Educated, eyes-open optimism pays; pessimism can only offer the empty consolation of being right.

The one lesson that emerges is the need to keep trying.  No miracles.  No perfection.  No millennium.  No apocalypse.  We must cultivate a skeptical faith, avoid dogma, listen and watch well, try to clarify and define ends, the better to choose means.

Free software for almost anything

Hard to think of anything else you would need when you are done downloading this list. I am going for note-taking software first; Firefox and Thunderbird are already my favorites, and I have used Audacity and a couple of others. What a collection. Anyone with experience of ClamWin? It can’t possibly consume more resources than F-Secure, which is what I currently have installed.

(Via Marginal Revolution.)

Disruptive titling

InformationWEEK is an IT magazine that sometimes displays an astonishing ability to not get it. This article is a case in point. Not so much the article itself – it is basically a description of five new "hot" technologies that in my view are, at best, lukewarm – but the title.

To make it clear: A technology is not disruptive because it is new. It is not disruptive because it might displace the currently used technology. It is not disruptive because it arrives suddenly on the market.

A technology is disruptive if it replaces an old technology by addressing an unmet need in the market in such a way that the incumbent technology cannot compete, because competing would invalidate the incumbent’s business model. This is normally because the new, disruptive technology is worse than the old one (according to the old measures), because the most valuable customers don’t want it, and because it would be less profitable for the incumbent companies to offer it.

None of the five technologies listed here qualifies according to those criteria. In fairness to the writer, David Strom: he doesn’t use the term "disruptive" anywhere in the text. That moniker has, I assume, been slapped on by some clueless editor with an urge to use fancy words. Too bad. But not the first time for InformationWEEK.