Monthly Archives: April 2005

Adams for inspiration

I am heads-down in various writing projects at the moment, but needed a break for inspiration and found it listening to Douglas Adams’ talk “Parrots, the Universe and Everything”, given at UCSB in 2001 (tip from Eirik).

Aside from the wonders of presentation technique – the only aids he uses are two snippets of text he reads to the audience – Adams is fascinating in the way he allows himself to use language and stories to strengthen his points. Notice how he repeats himself (generally in threes) and uses old-time humor: a riotous account of trying to buy condoms in Shanghai (to wrap around a microphone so they could record underwater), as well as stories of ultra-eccentric zoologists and downright weird animals.

I thought the most interesting idea, however, was his point that science is changing – that we no longer (at least not to the same degree) take things apart to understand them, but instead put them together so that we can watch them interact. Shades of Stephen Wolfram – computers allow us to study the effects of repeated patterns – and Mitchel Resnick – we relentlessly strive to discover intentionality in interactions and self-organizing systems. Life is no longer a mystery (since we have sequenced the human genome) – just a process of information (now we have to figure out what the code means). Also interesting was his observation that we have created a “pause in evolution” by surrounding ourselves with a buffer of medicine and insulation from the environment – though I would disagree with the word “pause” and rather say “change”.

Nice quote: “We don’t need to save the world. The world is fine. It has been through at least five periods of massive extinction before. [..] The world is big enough to look after itself.” Though he is less specific on what to do about saving the world for human habitation.

Why simulations?

At Concours, I am currently involved in a project on tools and techniques for business experimentation.
One experimental tool is simulation – and I think business simulation will be increasingly important in business training and perhaps strategy formulation, as it already is in medicine (from Boingboing).
As for justification – note the phrase “Every group overdosed the patient”….

Smart from TV

It’s official – TV makes us smarter. Some more precision would be good here – I am thinking IQ points for jump cuts, time jumps, numbers of un- and interrelated plot lines and so on. Then a mental nutrition pyramid and a point scale per program. Mandatory plot complexity governed by the authorities. “Have you seen your show today?”
E-mail, on the other hand, makes you less smart.
So that’s why: I don’t watch enough television and I read too much mail. I knew there was a reason….

Reflections on Ghoshal

Just finished reading Sumantra Ghoshal’s posthumously published article “Bad Management Theories are Destroying Good Management Practices” (Academy of Management Learning & Education, vol. 4, no. 1, pp. 75-91). I thought it brilliant, and was surprised when The Economist misunderstood it and started chewing him up for blaming Enron, Tyco and Global Crossing on bad MBA teaching.
The interesting conclusion in Ghoshal’s article lies not in its topicality but in its principle – his questioning of the dictum that maximizing shareholder returns is the sine qua non. He attributes the overemphasis on this to the fact that the organizational economics perspective has clear and coherent models, while the counterarguments are verbal and less precise. His argument incorporates risk: other stakeholders (such as employees in the short term and society in the long term) carry more risk through their involvement in an enterprise than do the shareholders, who can get out at relatively short notice. (And a side note: I like the use of Elster’s framework of theoretical models a lot.)
When I took one of Michael Jensen’s courses on organizational economics at Harvard, ages ago, this was the one “terse” and economical argument against the organizational economics theories (which are extremely appealing from a purely intellectual viewpoint). One of the cases discussed was that of Safeway, which had undergone a cost-cutting and downsizing exercise in order for the owners to extract value. It seemed to me that many of the people who lost their jobs were exposed to the rough and tumble of market economics without being prepared for it – and that society somehow has an obligation to prepare people for, or at least make them aware of, the risks of a market-based system. Landes makes a similar argument (admittedly only in a footnote in The Wealth and Poverty of Nations) when he talks about “[t]he contest [..] between lowbrow vested interests on the one hand, highbrow economic reasoning on the other.” (p. 266)
One way to take this argument forward might be to weight the influence of stakeholders in a company by the amount of risk they take. Though hard to do in practice, the many laws and regulations on worker participation in Europe have this effect – though I suspect that they are not underpinned by that theoretical viewpoint (and that they increase systemic risk, that is, undermine the competitiveness of the region they cover). A basis in risk adjustment would at least meet the conciseness criterion, since it probably could be shaped into a coherent theory. And there is the paradox of perverse incentives – the more risk you take, the more influence you gain.
Hmmmm… I have to think more about this – particularly since a new theory of the firm is needed for knowledge-based companies operating in a global setting, enabled by information technology.

Congratulations!

In the category of “things I should have written about ages ago but somehow forgot”, I have the huge pleasure of reporting that Bob Morison and Tammy Erickson, current and long-time colleagues from The Concours Group and excellent people in other ways, have been awarded the 2004 McKinsey Award for one of the best articles in the Harvard Business Review. The article, It’s Time to Retire Retirement, is the result of a Concours research project called Demography is Destiny (management summary), done in collaboration with Ken Dychtwald and his company AgeWave.
The project studied the implications of the rather alarming demographic evolution of the USA and Europe. The average American and European is getting older, and, as has been heavily debated in many newspaper articles, there is a problem in finding enough working people to finance the social welfare systems we have come to depend on. This is not news, but very few companies are acting on it – and the article points to the need for companies to stop pushing out older people and instead find innovative ways of making use of their capabilities as the workforce ages.
Incidentally – I think I can take credit for suggesting (but not inventing) the boiled frog analogy.

Book to get: Enron

The Economist has a review of Kurt Eichenwald’s new book on Enron, Conspiracy of Fools: A True Story, which finds that Enron’s top management were duped by some of their direct reports. It reads like a thriller.
I will eventually get this book – you learn more from corporate failures than from success stories. For a taste – one which also reads like a thriller – try the Powers report. It was the original report by the board committee looking into the causes of Enron’s failure, and it is a masterpiece of terse understatement.

Prototypes vs. simulations

What really is the difference between a prototype and a simulation? This discussion came up in a teleconference recently, over the Concours research project TBE: Tools and Techniques for Business Experimentation.
Intuitively, you might say that a prototype is something physical, while a simulation is done in software or acted out in some way. You might say that a prototype is built to show a concept, while a simulation is done to investigate the relationships between a concept and its environment. Or you might say that a prototype should be as close to the working end result as possible, whereas a simulation only shows its effects, not caring what is behind the curtain.
The trouble with these definitions is that both “prototypes” and “simulations” can be used for a variety of purposes. To a certain extent they are defined not by their content but by their use (or even by the environment they are in).
Classifying techniques and their uses is useful, but sometimes I wonder whether we need to be precise. Back in the stone age of computing there were all kinds of more-or-less well-defined classifications of computers, for instance into mainframe, mini and micro. First these were based on technical specifications, such as CPU speed or memory size. Then price was tried. Then you got the light-hearted classification of “a micro is a computer you can throw, a mini is one you can topple, and a mainframe is one you can crawl into”, which wasn’t too far off the mark. At least for a while, until technological evolution made the whole classification scheme useless.
Similarly, classifying things as software or hardware becomes more difficult, since to a certain extent they are substitutes. You get finer distinctions such as firmware, and jokey extensions such as wetware. I am increasingly fond of the saying that

hardware is something which, if you fiddle with it long enough, breaks, whereas software is something which, if you fiddle with it long enough, works.
So, perhaps we should differentiate between prototypes and simulations by saying that
a prototype is something you build to see whether something will work, and a simulation is something you build to see whether something will break….
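To make the quip slightly more concrete, here is a toy sketch (my own, in Python, with entirely made-up numbers – it has nothing to do with the TBE project or any particular tool): the “prototype” question runs a model once, on the case you have in mind, to see that it works, while the “simulation” question runs the same model thousands of times under randomized conditions to see how often it breaks.

```python
import random

def order_fulfilment(demand, capacity=100, buffer_stock=20):
    """A deliberately naive model of an order-fulfilment process (hypothetical numbers)."""
    if demand > capacity + buffer_stock:
        raise RuntimeError(f"stock-out: demand {demand:.0f} exceeds capacity plus buffer")
    return min(demand, capacity)  # units shipped this period

# "Prototype" question: does it work for the case we have in mind?
print(order_fulfilment(demand=90))  # -> 90, so yes, it works

# "Simulation" question: run it many times under random demand to see when it breaks.
random.seed(1)
runs = 10_000
failures = 0
for _ in range(runs):
    demand = random.gauss(mu=100, sigma=25)
    try:
        order_fulfilment(demand)
    except RuntimeError:
        failures += 1

print(f"Broke (stock-out) in {failures / runs:.1%} of simulated periods")
```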
Oh well, just a thought. And it is still fairly early in the morning.