…is in release 1.0 (blogpost) and available here. Haven’t checked it out yet, but I will – collaborative spreadsheet authoring would seem a great tool for research use, and perhaps also for making data available for public analysis.
Cringely speculates about what Google is going to do with all those data centers, and how to counter the strategy he thinks they are following by creating an intra-ISP P2P network. His description of why Microsoft won’t compete with Google even though they can is a pretty good example of disruptiveness in action.
Excellent presentation by Jeff Bezos on Amazon’s three new services: S3 (cloud of storage), Mechanical Turk (cloud of humans, sort of a small-task Google Answers), and Elastic Compute Cloud (cloud of processing). Bezos is an unapologetic geek – then again, in that crowd, he can be. Fun.
Here is my comment as I posted it:
I found May’s utterances utterly superficial and very old-fashioned. First of all, though not many CEOs come from the post of CIO, many top executives have had a stint in the IT part of the organization, learning what the technology does and how to think about the company as a value-producing system. As for using Peters and Waterman’s book as a sort of criterion for what constitutes a good CEO, that is laughable – the book refers to companies, not CEOs. And it has been thoroughly debunked many times.
May’s article would have been right on about 10 years ago. Back then, the reason CIOs did not make it into the top spot (in fact, 40% of CIOs were fired, and the average tenure was about 2.3 years) was because they did not understand that the skills that got you into the top IT position were not the skills that would keep you there.
At CSC, we conducted a survey of CIOs in the top 1000 corporations in the world and got a surprising result: of the 40% of CIOs who were fired, only a few were fired for "not producing cost-effective IT". The rest went because they could not communicate with the top management group, were ineffective change agents, or could not contribute to business strategy development. In other words, when you are a CIO in a large corporation, you are no longer the IT organization’s representative to top management, but the other way around. If you cannot make the transition to thinking about the whole company as a system, you are toast.
There are actually quite a few companies where technical competence is visible in top management. Royal Bank of Scotland comes to mind, with one of the most effective IT strategies in the world (a central "manufacturing division" handles transactions, IT, HR, and call centers, leaving the branch network – and they have more than 50 brands – to serve customers and grow the business). UPS, Wal-Mart, Amex, Wells Fargo, Royal Bank of Canada, some of the large airlines, Dell, quite a number of telcos, some insurance companies (I particularly like a German one called AMB, which runs many insurance companies off the same systems) and many others have top management that understands the impact of IT and thinks of the whole company as a system. Does that mean they come from the position of CIO? Not necessarily, though some of them do.
Being a CIO is about information, not IT. For that, you have a CTO or a CIO office that handles the technical pieces. Most of the top CIOs I know worry about the customers and the customer experience. One CIO of a large hotel chain worries about the speed of the in-room Internet connection – and whether the ventilation is good enough that you can shave after taking a shower without the mirror getting fogged up.
IT is a tool. It is an important tool, but it is what it does for the customer that matters. And the role of the IT organization is not to make IT elegant – it is to make business elegant. If the tools happen to be boring and the CIO not very visible – so what?
The relational database model, first formulated by Codd around 1970, is the dominant way to store structured data at present. However, queries to a relational database are fairly slow compared to complex queries against a search engine index, which (in addition to the poor user interface of relational databases, often mapped directly onto the table structure) means that search engines are now competitors for the job of extracting subsets from structured data. This, together with the difficulty of mapping object structures onto a relational database, leads me to wonder whether not only the query interface, but in fact the whole concept of the relational database, may soon face a disruptive threat from simpler structures.
A conversation at a Concours brainstorming session set me dusting off vaguely remembered concepts such as hierarchical and network databases, as well as the old difficulty of storing a multi-component concept with class attributes in a relational database (once characterized by the Economist as trying to store a car by putting the wheels under "W" and the engine under "E"). I found a good introductory text on the associative model of data by Simon Williams here (registration required), which also sums up the differences between the various data models pretty well.
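To make the contrast concrete: the associative model represents everything as items connected by links, rather than as rows in tables. Here is a toy sketch of that idea in Python – the names, verbs and the car example are my own illustration of the principle, not Williams’s actual implementation:

```python
# Toy sketch of the associative idea: everything is a link of the form
# (source, verb, target). The car stays one item, with its parts hanging
# off it via links, instead of wheels filed under "W" and engines under "E".
links = set()

def add(source, verb, target):
    links.add((source, verb, target))

def targets(source, verb):
    """All targets linked from `source` by `verb`, sorted for readability."""
    return sorted(t for (s, v, t) in links if s == source and v == verb)

add("car-1", "is a", "car")
add("car-1", "has part", "engine-1")
add("car-1", "has part", "wheel-1")
add("car-1", "has part", "wheel-2")
add("engine-1", "is a", "engine")

print(targets("car-1", "has part"))  # the parts come back together, as one car
```

The point is not the triviality of the code but the shape of the data: no schema had to be designed before the first fact was stored.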
Long-term, systems will be simulations of the reality they are to manage, but realities will also be influenced by the differences in processing capacity between humans and machines. A great example of the latter is the concept of random pick, well explained in Chris Anderson’s post about the shoe company Zappos, which stores shoes randomly (with each pair’s UPC scanned) as they come in. The result is a less optimized pick, but the total effort is less than that of sorting incoming pairs. Another effect, of course, is that they spend the least amount of effort on the products that move the least. (Though I love the comment about whether they occasionally have to defragment the warehouse.)
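The random-pick idea fits in a few lines of code, which is rather the point: the sorting work moves from the warehouse floor into a lookup table. A minimal sketch (class and method names are my own, and this is obviously not Zappos’s system):

```python
import random

class RandomPickWarehouse:
    """Chaotic storage: each incoming item goes into any free bin,
    and a UPC -> bin index replaces physically sorted shelves."""

    def __init__(self, n_bins):
        self.free_bins = list(range(n_bins))
        self.index = {}  # UPC -> bin number

    def put_away(self, upc):
        # Grab any free bin at random; the scan is the only bookkeeping.
        bin_no = self.free_bins.pop(random.randrange(len(self.free_bins)))
        self.index[upc] = bin_no
        return bin_no

    def pick(self, upc):
        # Look up where the scanner said the item went, and free the bin.
        bin_no = self.index.pop(upc)
        self.free_bins.append(bin_no)
        return bin_no

wh = RandomPickWarehouse(n_bins=100)
wh.put_away("042100005264")
print(wh.pick("042100005264"))  # same bin the scanner recorded at put-away
```

Note that slow movers cost nothing extra here: an item that never gets picked is never touched again after the one scan at arrival.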
My apologies for this nerdy detour – I am writing a paper on, among other things, search technology, and participating in an interesting Concours project on Enterprise Architecture.
With that, back to our regular programming…
Read more about this technology and the company behind it here.
Just got off the phone from a Concours teleconference with Tom Davenport, Bob Morison and about 50 other participants. The topic was Competing on Analytics, which was also the title of a Harvard Business Review article he wrote in January last year, and which forms the basis for a book of the same title coming out soon. Concours is launching a membership program called the Business Analytics Concours (glossy brochure here). The upshot is a revival of, and refocus on, business analytics, following a number of examples of companies (in particular Harrah’s casinos) that have had success due to their ability to relentlessly analyze and optimize their performance, value offerings and market opportunities.
Tom is an interesting character and a prolific writer on knowledge management, process optimization and knowledge worker productivity. He outlined how companies that compete on analytics tend to share certain attributes, such as senior management advocacy, an enterprise approach to analysis (rather than letting the thousand analysts bloom), going first deep and then broad, and paying attention to the development of a strong analytical capability.
My role was to comment and to discuss the technology side: While everyone recognizes that competing on analytics is a question of culture, understanding of the business environment and analytical skill and drive, there is a technology side to it as well. What kind of technologies can enable analysis and optimization, what emerging technologies should IT be monitoring and experimenting with in this space, and how would an enterprise architecture for a company with an analytical bent look different from most companies’ architectures today?
Here are my notes:
- the obvious technologies needed are repositories for data, such as data warehouses and datamarts, as well as business intelligence software for analyzing it
- on a more abstract level, we need technologies that allow for rapid collection (most data is out there in digital form, but it needs to be made analyzable), structuring (hopefully avoiding human intervention in the data cleanup phase, which is costly) and analysis of a wide range of data (which very often turns into experimentation)
- in particular, we need technologies that let people develop models from operational data, and redo structuring and categorization in a dynamic and shared way (did anyone say wiki?)
- a short-term path to better analytics may be search technology, which gives access to unstructured data, allows joining of many sources, and does not require rearchitecting and a massive job of initial categorization and structuring
- a sizeable investment in data preparation will kill the analytical impulse at birth
- long-term, there are interesting possibilities in the kind of data exchange protocols visualized by Van Jacobson, i.e. a form of networks that makes data location irrelevant and pathways hidden
- lastly: We have to realize that this is a cultural, strategic and managerial issue, and that almost any technology can be used in an analytical way. If you are not inclined to analyze your environment, no amount of technology is going to make you do it. In fact, more technology can distance you from the real world, and make you give in to the temptation of handing people pre-saved spreadsheets and fixed models rather than the ability to analyze.
- an ideal would be to have experimental facilities, where things could be sim’ed out, complete with a button labeled "Make it happen".
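The search-technology point above is worth making concrete. The appeal is that an inverted index imposes no schema up front: documents from any number of sources are tokenized once, and ad hoc queries then cut across those sources with no prior categorization work. A toy sketch (document IDs and the two "sources" are invented for illustration, and this is no particular product):

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns an inverted index: word -> set of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *words):
    """AND query: the documents containing every one of the words."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

# Documents pulled from two different systems, with no shared schema
docs = {
    "crm-17": "customer complaint about late delivery",
    "erp-03": "delivery delayed at warehouse",
    "crm-20": "customer praised fast delivery",
}
index = build_index(docs)
print(sorted(search(index, "customer", "delivery")))  # ['crm-17', 'crm-20']
```

Nobody had to decide in advance that "customer" and "delivery" were interesting dimensions – which is exactly why search lowers the data-preparation barrier that kills so many analytics efforts.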
This looks like an interesting project, because it goes right to the heart of what companies must get better at in a world where information spreads rapidly, imitation is easy, and you compete on your evolving optimization and innovation capability rather than individual technologies or services.