Author Archives: Espen


About Espen

For details, see www.espen.com.

D-Day from the middle

D-Day: The Battle for Normandy by Antony Beevor

My rating: 4 of 5 stars

After having read a number of Stephen Ambrose’s books on the battle for Normandy, Antony Beevor’s version is a relief: it offers much cooler analysis, more maps (of which every book on warfare should have more), and manages to include the German, Canadian, Polish and French sides of the equation to a much larger extent. For instance, he points out that more French civilians died as a result of the war in Normandy, particularly from the bombing and shelling, than died during the Blitz in London.

Beevor is somewhere between Ambrose (who provides much more detail on the experience of the individual soldiers, particularly infantry) and Liddell Hart and Keegan, who take a more professional, tactical and strategic view. The balance is good. However, the book adds little new knowledge, as far as I can tell, aside from more detail on the rivalry between the various commanders, as well as a good account of the liberation of Paris, with all the political machinations and posturing that went on before it.

Beevor is sharply critical of Montgomery, seeing his egocentric posturing and lack of imagination as a diplomatic and political failure as well as tactically costly. He does point out, however, that Montgomery was facing a more heavily defended part of the front, except at the beachhead. Beevor is also critical of the use of bombers as infantry support, and points out numerous tactical and strategic errors which cost lives and time. In all, most generals seem to make more errors than good decisions – which, I suppose, is primarily an effect of having to take decisions all the time, with imperfect knowledge.

The book manages to give an impression of both the large and the small view of war, and points out how the slaughter in Normandy spared the rest of France a protracted war. For that reason, if you are going to read just one book on D-Day, this is probably it.

View all my reviews >>

GRA6821/GRA6825: First, introductory lecture

In the first lecture, we will discuss what technology is, how it evolves, and what it means to have a technology strategy. For the first lecture, please read and be ready to discuss the following articles (articles in Blackboard unless otherwise noted):

For those who want to plow a bit deeper, read Neal Stephenson’s brilliant essay In the beginning…was the command line and see this video. Actually, try to do that, all of you.

Here are a few questions to get you thinking:

  1. What are Malone & Rockart’s key arguments? To what extent were they right about how information technology would influence corporations? What did they get wrong?
  2. Which parts of Microsoft’s strategy worked — and which didn’t? Imagine you were an interested technology investor in January 1984: Would you have invested in Microsoft based on this article and the company’s strategy?
  3. Why is technology understanding important for general managers? Why is it not?  How much do you need to know about technology to manage a technology-based organization?
  4. What does it mean when we use the term "an information economy"?
  5. (for those diving into Stephenson) Which technologies are currently in the technosphere, which are on their way out, which are coming in? How would you know where a technology is?

And here are two assignments I would like you to do before class starts:

  1. Visit this page, and set yourself up for the Wikipedia assignment, which will run throughout the course.
  2. Sign up for Twitter, follow @espenandersen, and look out for #gra6821 (and maybe #gra6825)

Stringing those dimensions together…

This video tries to do something very difficult: Explain dimensions beyond the four we are used to. And does a good job of it.

(And to my students – watch this video after having read Neal Stephenson’s In the beginning was … the command line, as an introduction to the course on technology strategy.)

(Via Cory)

Risky analysis

Bruce Schneier has a good article on the dangers of risk analysis when estimating software projects – and, by extension, estimating the risk of terrorist attacks.

It is the everyday risks that kill you – largely because the effect is delayed and the risk itself not very visible. I seem to remember someone proposing that the way to get responsible driving would be not to increase the safety level of the car, but to decrease it – for instance by outlawing seat belts and mounting a sharp four-inch metal spike in the middle of the steering wheel.

If too much imagination can make us overly risk-averse, a heavy dose of reality might have the opposite effect.

Ode to joy, in a poetic way

The Ode Less Travelled: Unlocking the Poet Within by Stephen Fry

My review

rating: 5 of 5 stars
Anything Stephen Fry writes is bound to be a joyous experience, but this one has to rank among his best (possibly only beaten by his autobiography and "The Hippopotamus", my absolute favorite.)

In this paean to poetry, Stephen Fry shows the rules and rhythms of how to construct a poem, allowing you to see the many intriguing details and quite possibly get on with writing some yourself. I knew about trochees and iambs and so on, but had no idea about the villanelle, for instance – an intriguing and rhythmic poetic form.

Stephen Fry has a loving relationship with language, and manages to convey both his feelings and knowledge about it. Highly recommended if you like to read already and would like to read or possibly write some poetry with a likable, humorous and extremely knowledgeable advisor at your side.

View all my reviews.

Soul food it isn’t

My American experience is complete: I have now eaten (about half of) a fried Twinkie.


Actually, my first Twinkie ever, and I am never ever going to touch one again. Promise.

(And if you want to know what Twinkies contain, here is an experiment to find out.)

Plagiarism showcased – and a call for action

I hate plagiarism: partly because it has happened to me, partly because I publish way too little since I overly self-criticize for lack of original thinking, and partly because I have had it happen with quite a few students. I am getting more and more tired of having to explain, even to executive students with serious job experience, that clipping somebody else’s text and presenting it as your own is not permissible – this year, I even had a student copy things out of Wikipedia and argue that it wasn’t plagiarism because Wikipedia is not copyrighted.

I suspect plagiarism is a bigger problem than we think. The most recent spat is noted in Boing Boing – read the comments if you want a good laugh and some serious discussion. (My observation, not particularly original: Even if this thing wasn’t plagiarized, isn’t this rather thin for a doctoral dissertation?)

The thing is, plagiarism will come back to bite you, and with the search tools out there, I can see a point in the not-too-distant future where all academic articles ever published will be fed into a plagiarism checker, with very interesting results. Quite a few careers will end, no doubt after much huffing and puffing. Johannes Gehrke and friends at Cornell have already done work on this for computer science articles – I just can’t wait to see what will come out of tools like these when they really get cranking. I seem to remember Johannes saying that most people don’t plagiarize, but that a few seem to do it quite a lot.

It is high time we turn the student control protocols loose on published academic work as well. Nothing like many eyeballs to dig out that shallowness….

A wave of Google

This presentation from the Google I/O conference is an 80-minute demonstration of a really interesting collaborative tool that very successfully blends the look and feel of regular tools (email, Twitter) with the embeddedness and immediacy of wikis and shared documents. I am quite excited about this and hope it makes it out into the consumer space and does not just sit inside single organizations – collaborative spaces can create a world of many walled gardens, and I am a person who works as much between organizations as within them.

Google Wave really shows the power of centralized processing and storage. Here are some things I noted and liked:

  • immediate updating (broadcast) to all clients, keystroke by keystroke
  • embedded, fully editable information objects
  • history awareness (playback interactions)
  • central storage and broadcast mean you can edit information objects and have the changes reflected back in previous views, which gives a pretty good indication that the architecture of this system is a tape of interactions played forward (see the sketch after this list)
  • concurrent collaborative editing (I want this! No more refreshes!)
  • cool extensions, such as a context-aware spell checker, an immediate link creator, concurrent searcher
  • programs are seen as participants much like humans
  • easy developer model, all you need to do is edit objects and store them back
  • client-side and server-side API
  • interactions with outside systems
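
That "tape of interactions" idea is worth spelling out. Here is a minimal, hypothetical sketch (plain Python, not anything from Wave itself) of how a document can be rebuilt, or played back to any point in its history, by replaying a log of edit events:

```python
# Minimal sketch of a "tape of interactions played forward": every edit is
# stored as an event, and any view of the document is produced by replaying
# the event log up to a chosen point. This illustrates the architectural
# principle only; it is not Wave's actual data model.

from dataclasses import dataclass

@dataclass
class InsertText:
    position: int
    text: str

@dataclass
class DeleteText:
    position: int
    length: int

def replay(events, upto=None):
    """Rebuild the document state by playing the event tape forward."""
    doc = ""
    for event in events[:upto]:
        if isinstance(event, InsertText):
            doc = doc[:event.position] + event.text + doc[event.position:]
        elif isinstance(event, DeleteText):
            doc = doc[:event.position] + doc[event.position + event.length:]
    return doc

log = [
    InsertText(0, "Hello world"),
    InsertText(5, ","),
    DeleteText(6, 6),
    InsertText(6, " Wave"),
]

print(replay(log))           # final state: "Hello, Wave"
print(replay(log, upto=2))   # playback: the document as it looked two edits in
```

The history-awareness bullet above then falls out for free: replaying a prefix of the tape gives you the document exactly as it looked at that moment.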

I can see some strategic drivers behind this: Google is very much threatened by walled gardens such as Facebook, and this could be a great way of breaking that open (remember, programs go from applications to platforms to protocols, and this is a platform built over OpenSocial, which jams open walled gardens). This could just perhaps be what I need to be able to more effectively work over several organizations. Just can’t wait to try this out when it finally arrives.

From surfing the net to surfing the waves….

Update: Here is the Google Blog entry describing Wave from Lars Rasmussen.

Fixing the US economy

Since everyone else has an opinion on this, I’ll make it brief: Three ways to vastly improve the US economy:

  • Federal gas tax. The fuel-efficiency rules recently put in place will spawn lots of innovation into ways to get around them (engine upgrade kits, anyone?). A federal gas tax is easy to apply, forces all automakers to do something with their engines, reduces the demand for transportation (hence, stimulates local production) and reduces dependence on foreign oil and foreign loans.
  • Federal calorie tax, to apply not just to sugar-sweetened drinks (again, something that encourages all kinds of fiddling), but to any high-caloric, non-nutritional food substance, including high-fructose corn syrup. America is dangerously overweight, and one reason is that good food is expensive and junk food is not only cheap, but in many cases subsidized. Taxing to reduce the consumption of obviously bad and unnecessary stuff makes all kinds of sense. I am less certain whether it makes sense to subsidize the good stuff – too much bureaucracy, and too many discussions.
  • Encourage house-buying immigrants. Granting visas to a million families or two, provided they buy a house, should be a much-needed shot in the arm. The USA is not even close to being overpopulated, and a fresh new crop of resourceful immigrants is just what the doctor might order. Get to it.

There, that was easy. The rest is a small matter of implementation, which I will leave as an exercise for the reader.

And, as Piet Hein said, if you take humor only for laughter and seriousness only seriously, you have misunderstood both….

Fixing and fixability as attribute and philosophy

Matthew Crawford’s The Case for Working With Your Hands has made the top of the NY Times website, and well deservedly so. His argument is that physical work, especially the diagnostic work involved in solving technical problems, is as fulfilling and intellectually stimulating as any desk job, though the hours may be longer and your fingers dirtier. For instance, you have to think about your angle of attack not just in terms of the likelihood of being right, but the cost of finding out:

The attractiveness of any hypothesis is determined in part by physical circumstances that have no logical connection to the diagnostic problem at hand. The mechanic’s proper response to the situation cannot be anticipated by a set of rules or algorithms.

There probably aren’t many jobs that can be reduced to rule-following and still be done well. But in many jobs there is an attempt to do just this, and the perversity of it may go unnoticed by those who design the work process. Mechanics face something like this problem in the factory service manuals that we use. These manuals tell you to be systematic in eliminating variables, presenting an idealized image of diagnostic work. But they never take into account the risks of working on old machines. So you put the manual away and consider the facts before you. You do this because ultimately you are responsible to the motorcycle and its owner, not to some procedure.

Sounds like a good consultant to me. And the right kind of academic.

Buying an old Mercedes has certainly taught me something about expertise. I first tried taking it to the largest Mercedes dealer in Boston, whose reps took in the car wearing white coats and were utterly useless: The customer service rep had never heard of this particular model (it was the flagship at the time), the computer system could not deal with cars before 1982, and come to think of it, the rep didn’t know much about cars in general. The mechanics seemed to be looking for a place to stick the computer diagnostic tool, nearly destroyed the suspension and tried to solve problems by "Easter Egging" – i.e., replacing parts until the problem disappears. Eventually I found a company that had both the knowledge of the car and the diagnostic philosophy required – to listen to the problem and determine what it is based on the few symptoms a car really has to give. What a relief – and what a fulfilling job it must be to work like that.

A colleague of mine remarked, a few weeks ago, that "nobody repairs anything anymore." A few years ago I bought my wife a nice everyday watch, a Seiko with a stainless steel chain. The chain broke, she took it in, and was told that the cost of fixing the chain would be so high that it would be better to just replace the watch. The watch was not designed to be repaired.

What little work I have been able to do on my old Mercedes has been joyful, since the car is designed to be fixed – the screws are solid (no plastic clips that rot over time) and accessible, everything is laid out with some logic, and if you sit down and think about it, you can figure the technology out (with, for me, the exception of the automatic gear boxes, which I don’t understand and wouldn’t have the tools and space to do anything with anyway.)

Crawford continues:

Some diagnostic situations contain a lot of variables. Any given symptom may have several possible causes, and further, these causes may interact with one another and therefore be difficult to isolate. In deciding how to proceed, there often comes a point where you have to step back and get a larger gestalt. Have a cigarette and walk around the lift. The gap between theory and practice stretches out in front of you, and this is where it gets interesting. What you need now is the kind of judgment that arises only from experience; hunches rather than rules. For me, at least, there is more real thinking going on in the bike shop than there was in the think tank.

Put differently, mechanical work has required me to cultivate different intellectual habits. Further, habits of mind have an ethical dimension that we don’t often think about. Good diagnosis requires attentiveness to the machine, almost a conversation with it, rather than assertiveness, as in the position papers produced on K Street. Cognitive psychologists speak of “metacognition,” which is the activity of stepping back and thinking about your own thinking. It is what you do when you stop for a moment in your pursuit of a solution, and wonder whether your understanding of the problem is adequate.

This is one reason I sometimes envy people who do "mere" programming for a living – the ability to have problems that have solutions, that tell you when they are solved, and that reward both laser-like focus on the detail and the broader reflection (and abstraction) necessary to see the bigger picture. The problem-solving I am involved with on a daily basis is less a question of understanding what to do than it is a question of finding a way to express the solution in a way that convinces those who hold the key to it to actually do it. Assertiveness certainly helps, but, boy, would I love to just tinker for a while.

Anyway, I have but scratched the surface of Crawford’s argument, but hey, I think I have gotten the gist of it. The rest I leave you to read on your own.

The datacenter is the new mainframe

From Greg Linden comes a link and a reference to a very interesting book by two Google engineers: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (PDF, 2.8Mb) by Luiz André Barroso and Urs Hölzle. This is a fascinating introduction to data center design, with useful discussions of architecture, how to do cooling and reduce power use (it turns out, for instance, that getting computers that use power proportionally to their level of use is extremely important).
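
To see why energy proportionality matters so much, here is a back-of-the-envelope sketch with made-up but plausible numbers (my illustration, not the book’s): a server that still draws 60% of its peak power while idle wastes far more energy at typical data-center utilization levels than a nearly proportional one.

```python
# Back-of-the-envelope sketch (hypothetical numbers) of why energy-proportional
# servers matter: data-center servers typically run far below peak utilization,
# and a non-proportional machine still draws a large share of peak power when
# it is nearly idle.

def power_draw(utilization, idle_fraction, peak_watts=500.0):
    """Linear power model: idle_fraction of peak at 0% load, full power at 100%."""
    return peak_watts * (idle_fraction + (1.0 - idle_fraction) * utilization)

for utilization in (0.1, 0.3, 0.5):
    legacy = power_draw(utilization, idle_fraction=0.6)       # draws 60% of peak when idle
    proportional = power_draw(utilization, idle_fraction=0.1)  # close to energy-proportional
    print(f"{utilization:.0%} load: legacy {legacy:.0f} W, "
          f"proportional {proportional:.0f} W, "
          f"saving {1 - proportional / legacy:.0%}")
```

At 10% load the near-proportional machine in this toy model uses roughly 70% less power than the legacy one, which is the book’s point in one line of arithmetic.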

I suspect that even highly experienced data center designers will find something useful here. The book is written for someone with some degree of technical expertise, but you do not need a deep background in computer science to find much here that is interesting and useful.

One of my recurring ideas (and I am by no means alone in thinking this) is that the Norwegian west coast, with its cool climate, relatively abundant hydroelectric energy and underused industrial infrastructure (we used to have lots of electrochemical and electrometallurgical plants) could be a great place to do most of Europe’s computing. Currently we sell our electric energy to Europe through power lines, which incurs a large energy loss. Moving data centers to Norway and distributing their functionality through fiberoptic cables seems a much more effective way of doing things to me, especially since that region of the country has a reasonable supply both of energy engineers and industrial workers with the skill set and discipline to run that kind of operation.

Now, if I could only find some investors…

From links to seeds: Edging towards the semantic web

Wolfram Alpha just may take us one step closer to the elusive Semantic Web, by evolving a communication protocol out of its query terms.

(this is very much in ruminating form – comments welcome)

Wolfram Alpha, an exciting new kind of "computational" search engine, officially launched on May 18. Rather than looking up documents where your question has been answered before, it actually computes the answer. The difference, as Stephen Wolfram himself has said, is that if you ask what the distance is to the moon, Google and other search engines will find you documents that tell you the average distance, whereas Wolfram Alpha will calculate what the distance is right now, and tell you that, in addition to many other facts (such as the average). Wolfram Alpha does not store answers, but creates them every time. And it primarily answers numerical, computable questions.

The difference between Google (and other search engines) and Wolfram Alpha is not so clear-cut, of course. If you ask Google "17 mpg in liters per 100km" it will calculate the result for you. And you can send Wolfram Alpha non-computational queries such as "Norway" and it will give an informational answer. The difference lies more in what kind of data the two services work against, and how they determine what to show you: Google crawls the web, tracking links and monitoring user responses, in a sense asking every page and every user of their services what they think about all web pages (mostly, of course, we don’t think anything about most of them, but in principle we do.) Wolfram Alpha works against a database of facts with a set of defined computational algorithms – it stores less and derives more. (That being said, they will both answer the question "what is the answer to life, the universe and everything" the same way….)
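
As a concrete illustration, the "17 mpg in liters per 100km" query boils down to a unit conversion. Here is a quick sketch of the arithmetic (just the conversion itself, not how either engine actually implements it):

```python
# Sketch of the arithmetic behind the "17 mpg in liters per 100km" query.
# Conversion constants are exact definitions for US gallons and miles.

LITERS_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def mpg_to_l_per_100km(mpg):
    km_per_liter = mpg * KM_PER_MILE / LITERS_PER_US_GALLON
    return 100.0 / km_per_liter

print(f"{mpg_to_l_per_100km(17):.2f} L/100km")  # about 13.84
```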

While the technical differences are important and interesting, the real difference between WA and Google lies in what kind of questions they can answer – to use Clayton Christensen’s concept, the different jobs you would hire them to do. You would hire Google to figure out information, introductions, background and concepts – or to find that email you didn’t bother filing away in the correct folder. You would hire Alpha to answer precise questions and get the facts, rather than what the web collectively has decided the facts are.

The meaning of it all

Now – what will the long-term impact of Alpha be? Google has made us replace categorization with search – we no longer bother filing things away and remembering them, for we can find them with a few half-remembered keywords, relying on sophisticated query front-end processing and the fact that most of our not-that-great minds think depressingly alike. Wolfram Alpha, on the other hand, is quite a different animal. Back in the 80s, I once saw someone exhort their not-very-digital readers to think of the personal computer as a "friendly assistant who is quite stupid in everything but mathematics." Wolfram Alpha is quite a bit smarter than that, of course, but the fact is that we now have access to this service which, quite simply, will do the math and look up the facts for us. Our own personal Hermione Granger, as it were.

I think the long-term impact of Wolfram Alpha will be to further something that may not have started with Google, but certainly became apparent with them: the use of search terms (or, if you will, seeds) as references. It is already common, rather than writing out a URL, to help people find something by saying "Google this and you will find it". I have a couple of blogs and a web page, but googling my name will get you there faster (and you can misspell my last name and still not miss). The risk in doing that, of course, is that something can intervene. As I read (in this paper), General Motors a few years ago ran an ad for a new Pontiac model, at the end of which they exhorted the audience to "Google Pontiac" to find out more. Mazda quickly set up a web page with Pontiac in it, bought some keywords on Google, and quite literally shanghaied GM’s ad.

Wolfram Alpha, on the other hand, will, given the same input, return the same answer every time. If the answer should change, it is because the underlying data has changed (or, extremely rarely, because somebody figured out a new way of calculating it). It would not be because someone external to the company has figured out a way to game the system. This means that we can use references to Wolfram Alpha as shorthand – enter "budget surplus" in Wolfram Alpha, and the results will stare you in the face. In the sense that math is a terse and precise language for expressing certain concepts, Wolfram Alpha seeds will, I think, emerge as a notation for referring to factual information.
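
To make the seeds-as-references idea concrete, here is a tiny sketch of how such a seed could be passed around as a link. The query-parameter URL scheme is what Wolfram Alpha appears to use at the time of writing; that is an assumption on my part, not a documented, stable interface:

```python
# Sketch: a Wolfram Alpha "seed" expressed as a shareable reference.
# The /input/?i= URL scheme is assumed, not guaranteed to be stable.

from urllib.parse import quote_plus

def alpha_seed(query):
    return "http://www.wolframalpha.com/input/?i=" + quote_plus(query)

print(alpha_seed("budget surplus"))
print(alpha_seed("distance to the moon"))
```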

A short detour into graffiti

Back in the early-to-mid-90s, Apple launched one of the first pen-based PDAs, the Apple Newton. The Newton was, for its time, an amazing technology, but for once Apple screwed it up, largely because they tried to make the device do too much. One important issue was the handwriting recognition software – it would let you write in your own handwriting, and then try to interpret it. I am a physician’s son, and I certainly took after my father in the handwriting department. Newton could not make sense of my scribbles, even if I tried to behave, and, given that handwriting recognition is hard, it took a long time doing it. I bought one, and then sent it back. Then the Palm Pilot came, and became the device to get.

The Palm Pilot did not recognize handwriting – it demanded that you, the user, write to it in a sign language called Graffiti, which recognized individual characters. Most of the characters resembled the regular characters closely enough that you could guess what they were; for the others you either had to consult a small plastic card or experiment. The feedback was rapid, so experimenting usually worked well, and pretty soon you had learned – or, rather, your hand had learned – to enter the Graffiti characters rapidly and accurately.

Wolfram Alpha works in the same way as Graffiti did: As Stephen Wolfram says in his talk at the Berkman Center, people start out writing natural language but pretty quickly trim it down to just the key concepts (a process known in search technology circles as "anti-phrasing"). In other words, by dint of patience and experimentation, we (or, at least, some of us) will learn to write queries in a notation that Wolfram Alpha understands, much like our hands learned Graffiti.

From links to seeds to semantics

Semantics is really about symbols and shorthand – a word is created as shorthand for a more complicated concept by a process of internalization. When learning a language, rapid feedback helps (which is why I think it is easier to learn a language with a strict and terse grammar than a permissive one), simplicity helps, and so does a structure and culture that allow for creating new words by relying on shared context and intuitive combinations (see this great video with Stephen Fry and Jonathan Ross on language creation for some great examples).

And this is what we need to do – gather around Wolfram Alpha and figure out the best way of interacting with the system – and then conduct "what if" analyses of what happens if we change the input just a little. To a certain extent, it is happening already, starting with people finding Easter Eggs – little jokes developers leave in programs for users to find. Pretty soon we will start figuring out the notation, and you will see web pages use Wolfram Alpha queries first as references, then as modules, then as dynamic elements.

It is sort of quirky when humans start to exchange query seeds (or search terms, if you will).  It gets downright interesting when computers start doing it. It would also be part of an ongoing evolution of gradually increasing meaningfulness of computer messaging.

When computers – or, if you will, programs – needed to exchange information in the early days, they did it in a machine-efficient manner: information was passed using shared memory addresses, hexadecimal codes, assembler instructions and other terse and efficient, but humanly unreadable encoding schemes. Sometime in the early 80s, computers were getting powerful enough that the exchanges could gradually be done in a human-readable format – the SMTP protocol, for instance, a standard for exchanging email, could be read and even hand-built by humans (as I remember doing in 1985, to send email outside the company network I was on). The world wide web, conceived in the early 90s and opened to a wider audience in 1994, had at its core an addressing system – the URL – which could be used as a general way of conversing between computers, no matter what their operating systems or languages. (To the technology purists out there – yes, the WWW relies on a whole slew of other standards as well, but I am trying to make a point here.) It was rather inefficient from a machine communication perspective, but very flexible and easy to understand for developers and users alike. Over time, it has been refined from pure exchange of information to the sophisticated exchanges needed to make sure it really is you when you log into your online bank – essentially by increasing the sophistication of the HTML markup language towards standards such as XML, where you can send over not just instructions and data but also definitions and metadata.
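
To illustrate how human-readable that layer is, here is roughly what a hand-typed SMTP session looks like, followed by the same exchange driven from a program. This is a simplified sketch with placeholder host and addresses, not a transcript of any real server:

```python
# A rough sketch of a hand-typed SMTP session (simplified; real server
# responses vary). The protocol is plain, readable text, which is the point:
# a human could carry out the whole exchange over a raw terminal connection.
#
#   S: 220 mail.example.com ESMTP ready
#   C: HELO myhost.example.com
#   S: 250 Hello myhost.example.com
#   C: MAIL FROM:<espen@example.com>
#   S: 250 OK
#   C: RCPT TO:<friend@example.org>
#   S: 250 OK
#   C: DATA
#   S: 354 End data with <CR><LF>.<CR><LF>
#   C: Subject: Hello from 1985
#   C:
#   C: This message was typed straight into the mail server.
#   C: .
#   S: 250 OK, message queued
#   C: QUIT
#   S: 221 Bye

# The same exchange driven from Python's standard library.
# The host name and addresses are placeholders; this will not reach a real server.
import smtplib

msg = "Subject: Hello from 1985\r\n\r\nThis message was typed straight into the mail server.\r\n"
with smtplib.SMTP("mail.example.com") as server:
    server.sendmail("espen@example.com", ["friend@example.org"], msg)
```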

The much-discussed semantic web is the natural continuation of this evolution – programming further and further away from the metal, if you will. Human requests for information from each other are imprecise but rely on a shared understanding of what is going on, the ability to interpret results in context, and a willingness to use many clues and requests for clarification to arrive at a desired result. Observe two humans interacting over the telephone – they can have deep and rich discussions, but as soon as the conversation involves computers, they default to slow and simple communication protocols: spelling words out (sometimes using the international phonetic alphabet), going back and forth about where to apply mouse clicks and keystrokes, double-checking to avoid mistakes. We just aren’t that good at communicating the way computers do – but can the computers eventually get good enough to communicate with us?

I think the solution lies in mutual adaptation, and the exchange of references to data and information in other terms than direct document addresses may just be the key to achieving that. Increases in the performance and functionality of computers have always progressed in a punctuated-equilibrium fashion, alternating between integrated and modular architectures. The first mainframes were integrated with simple terminal interfaces, which gave way to client-server architectures (exchanging SQL requests), which gave way to highly modular TCP/IP-based architectures (exchanging URLs), which may give way to mainframe-like semi-integrated data centers. I think those data centers will exchange information at a higher semantic level than any of the others – and Wolfram Alpha, with its terse but precise query structure, may just be the way to get there.

Brilliant image from Google today

Google changes its main logo according to whim and season, a practice that I like. The logo I captured here is a reference to the lemur-like fossil recently found in Germany that just may be the missing link in the evolution from ape to man.

Of course, the missing link has been claimed before – the Piltdown man in particular. History will be the judge, but kudos to Google for quick thinking and a really cool illustration. Unapologetic science-geekiness rules!

Abercrombie & Fitch & truly moronic store policies

While I am in the States, my family sends me orders for various items they would like me to bring back. Since I have three daughters and a rather dishy wife, that means shopping in places such as Abercrombie & Fitch, which are mall-rat havens with pounding music, posters of meticulously depilated underage models artfully grabbing their crotches, and clusters of sweet young things standing around staring into space, occasionally shouting “What?”. My standard approach is to bring a netbook or some printouts and convince one of these creatures to go get the stuff for me.

Usually this works out to everyone’s satisfaction, but not today. I found myself in the Abercrombie & Fitch store next to Faneuil Hall in Boston, looking for a specific top (pictured) in a specific size. The store had only one left, which they refused to sell to me. When I asked why, the salesperson (and, eventually, the store manager) explained to me that it was the last one in the store and belonged to the “Visual Team”, apparently an organizational entity with immense powers. They did offer to check whether it would be available in another store. The idea that they could sell me their piece and get a new one from another store apparently did not occur to them.

A few years ago this would have occasioned some rather sarcastic attempts by yours truly to explain the lack of basic business instinct in this policy, but with advancing years I have come to understand that discussing anything with “managers” (who manage without authority, an interesting concept) is like hunting dairy cows with a scoped rifle, to steal a phrase from P. J. O’Rourke. So I shook my head and left.

As for the store policy, I just can’t get it: There is a recession, and retail is suffering along with everyone else. Abercrombie’s revenues are stagnating and their stock is down there with the rest of the market. And here I am, a customer wanting to buy a product they have, and rather than sell it to me they instead saddle me with their own bureaucracy.

Someone once said that in all companies, we start out working for the customer and end up working for the CFO. In Abercrombie they work for the Visual Team – it clearly is more important how the store looks than whether any sales take place there. I don’t get it. But then again, I am just a lowly business school professor who thought selling stuff was what stores did.

I must be getting old. And sadly lacking in the depilation department.

Manic depressiveness as illness and lifestyle

YouTube turns out, no particular surprise, to be a fount of interesting information and entertainment. After watching Stephen Fry on Gutenberg’s press, I came across a documentary he had done for the BBC on bipolar disorder, also called manic depression. I found it very interesting because it lays out a good description of the illness and the consequences it has for patients and their families, all in a quiet and informative way that never becomes sensationalistic or titillating. It does become personal, though: You can see on Stephen Fry’s face in episode two, when he is informed of the severity of his own condition, that this is a hard message to get.

Mental illnesses are gradually becoming less of a taboo in society, and more and more we understand the underlying causes, though treatments to a large extent are experimental, treating symptoms rather than causes. This documentary, in an excellent fashion, shows the link between personality and illness – a surprising number of people with bipolar disorder like the manic phases, when creativity is flowing and inhibitions are lower. The illness is part of their personality as well, and potentially losing that is a difficult choice to make.

Highly recommended. (The videos below may change – occasionally the BBC kicks them off the ‘tube, and then they appear again….)

 

Interesting Wolfram Alpha statistics

Here is the answer you get from entering "budget surplus" into Wolfram Alpha:

[Screenshot: Wolfram Alpha’s answer, listing the largest government budget surpluses and deficits by country]

Two things I did not know: The fifth largest government surplus in the world is held by Serbia, which surprises me, given that the country has 14% unemployment and a recovering economy, according to Wikipedia. And that Japan’s deficit is very close to the US’, indicating that things are not as bad in the US as you might think. Or perhaps that the numbers are a bit dated, but according to the source information, most of the numbers are from 2009.

Since May 17th is Norway’s national day, I think it behooves me to point out that of the five surplus states listed above, Norway is the nicest place to live, by most measures (weather, culture, politics, human rights, health care, etc. etc.). On the other hand, many of the countries with large deficits are nice places to live, so I wouldn’t read too much into the economics at all…

(Hat tip to Karthik, who retweeted one of my tweets, which I misunderstood and started researching….)

Stephen Fry and the Gutenberg press

This is a delightful program which explains how the Gutenberg press works – through the time-honored pedagogic technique of actually building one:

Someone goes on a hungry journey

City of Thieves by David Benioff

My review

rating: 5 of 5 stars
Bleak and terse but very likeable story about an orphaned adolescent and a soldier on an impossible quest in and around St. Petersburg (Leningrad) during the 900-day siege. The authenticity and details are moving, the language and plot are fluid, and there are moments of suspense and quite a bit of laconic humor. Highly recommended.

View all my reviews.