Category Archives: The thoughtful manager

On videoconferencing and security

Picture: Zoom

Yesterday began with a message from a business executive concerned about the security of Zoom, the video conferencing platform that many companies (and universities) have landed on. The reason was a newspaper article regurgitating several internet articles, partly about functionality that Zoom has adequately documented, partly about security holes that were fixed long ago.

So is there any reason to be concerned about Zoom or Whereby or Teams or Hangouts or all the other platforms?

My answer is “probably not” – at least not for the security holes discussed here, and not for ordinary users (and that includes most small- to medium-sized companies I know about).

It is true that video conferencing introduces some security and privacy issues, but if we look at it realistically, the biggest problem is not the technology but the people using it (something we nerds refer to as PEBKAC: Problem Exists Between Keyboard And Chair).

When a naked man sneaks into an elementary school class via Whereby, as happened a few days ago here in Norway, it is not due to technology problems, but because the teacher had left the door wide open, i.e., had not turned on the function that makes it necessary to “knock” and ask for permission to enter.

When anyone can record (and have the dialogue automatically transcribed) from Zoom, it is because the host has not turned off the recording feature. By the way, anyone can record a video conference with screen capture software (such as Camtasia), a sound recorder or for that matter a cell phone, and no (realistic) security system in the world can do anything about it.

When the boss can monitor that people are not using other software while sitting in a meeting (a feature that can be completely legitimate in a classroom – it is equivalent to the teacher looking out over the class to see if the students are awake), well, I don’t think the system is to blame for that either. Any leader who holds meetings so irrelevant that people do not bother to pay attention should rethink their communication strategy. No executive I know would have either the time or the interest to activate this feature – because if you need technology to force people to wake up, you don’t have a problem technology can solve.

The risk of a new tool should not be measured against some perfect solution, but against the alternative if you don’t have it. Right now, video conferencing is the easiest and best tool for many – so that is why we use it. But we have to take the trouble to learn how it works. The best security system in the world is helpless against people writing their password on a Post-It, visible when they are in a videoconference.

So, therefore – before using the tool – take a tour of the setup page, choose carefully what features you want to use, and think through what you want to achieve by having the meeting.

If that’s hard, maybe you should cancel the whole thing and send an email instead.

Getting dialogue online

Back in the nineties, I facilitated a meeting with Frank Elter in a Telenor video meeting room in Oslo. There were about 8 participants and an invited presenter: Tom Malone from MIT.

The way it was set up, we first saw a one hour long video Tom had created, where he gave a talk and showed some videos about new ways of organizing work (one of the more memorable sequences was (a shortened version of) the four-hour house video.) After seeing Tom’s video, we spent about one hour discussing some of the questions Tom had raised in the video. Then Tom came on from a video conferencing studio in Cambridge, Massachusetts, to discuss with the participants.

The interesting thing, to me, was that the participants experienced this meeting as “three hours with Tom Malone”. Tom experienced it as a one hour discussion with very interested and extremely well prepared participants.

A win-win, in other words.

I was trying for something similar yesterday, guest lecturing in Lene Pettersen’s course at the University of Oslo, using Zoom with early entry, chat, polling and all video/audio enabled for all participants. This was the first videoconference lecture for the students and for three of my colleagues, who joined in. In preparation, the students had read some book chapters and articles and watched my video on technology evolution and disruptive innovations.

For the two hour session, I had set up this driving plan (starting at 2 pm, or 14:00 as we say over here in Europe…):

Leading the discussion. Zoom allows you to show a virtual background, so I chose a picture of the office I would have liked to have…

14:00 – 14:15 Checking in, fiddling with the equipment and making sure everything worked. (First time for many of the users, so have them show up early so technical issues don’t eat into the teaching time.)
14:15 – 14:25 Lene introduces the class, talks about the rest of the course and turns over to Espen (we also encouraged the students to enter questions they wanted addressed in the chat during this piece)
14:25 – 14:35 Espen talking about disruption and technology-driven strategies.
14:35 – 14:55 Students into breakout rooms, discussing what it would take for video and digital delivery to be a disruptive innovation for universities. (Breaking students up into 8 rooms of four participants, asking them to nominate a spokesperson to take notes and paste them into the chat when they return, and to discuss the specific question: What needs to happen for COVID-19 to cause a disruption of universities, and how would such a disruption play out?)
14:55 – 15:15 Return to main room, Espen sums up a little bit, and calls on spokesperson from each group (3 out of 8 groups) based on the notes posted in the chat (which everyone can see). Espen talks about the case and raises the next discussion question.
15:15 – 15:35 Breakout rooms, students discuss the next question: What needs to happen for DNB (Norway’s largest bank) to become a data-driven, experiment-oriented organization? What are the most important obstacles and how should they be dealt with?
15:35 – 15:55 Espen sums up the discussion, calling on some students based on the posts in the chat.
15:55 – 16:00 Espen hands back to Lene, who sums up. After 16:00, we stayed on with colleagues and some of the students to discuss the experience.

The dashboard as I saw it. Student names obscured.

Some reflections (some of these are rather technical, but they are notes to myself):

  • Not using Powerpoint or a shared screen is important. Running Zoom in Gallery view (I had set it up so you could see up to 49 participants at the same time) and having the students log in to Zoom and upload a picture gave a feeling of community. Screen and/or presentation sharing breaks the flow for everyone – when you do it in Zoom, the screen reconfigures (as it does when you come back from a breakout room) and you have to reestablish the participant panel and the chat floater. Instead, using polls and discussion questions, with results communicated through the chat, was easier for everyone (and way less complicated).
  • I used polls on three occasions: before each discussion breakout, and at the end to ask the students what the experience was like. They were very happy about it and had good pointers on how to make it better. Satisfactory results, I would say.

  • We had no performance issues and rock-steady connection the whole way through.
  • It should be noted that the program is one of the most selective in Norway and the students are highly motivated and very good. During the breakout sessions I jumped into each room to listen in on the discussion (and learned that it was best to pause recording, to avoid a voice saying “This session is being recorded” as I entered). The students were actively discussing in every group, with my colleagues (Bendik, Lene, and Katja) also participating. I had kept the groups to four participants, based on feedback from a session last week, where the groups had been 6-7 students and had issues with people speaking over each other.
  • Having a carefully written driving plan was important, but it was still a very intense experience; I was quite exhausted afterwards. My advice on not teaching alone stands – in this case, I was the only one with experience, but that will change very fast. I kept feeling rushed and would have liked more time, especially in the summary sections, where I would have liked to bring more students in to talk.
  • I did have a few breaks myself – during the breakout sessions – to go to the bathroom and replenish my coffee – but failed to allow for breaks for the students. I assume they managed to sneak out when necessary (hiding behind a still picture), but next time, I will explicitly have breaks, perhaps suggest a five minute break in the transition from main room to breakout rooms.

Conclusion: This can work very well, but I think it is important to set up each video session based on what you want to use it for: to present something, to run an exercise, to facilitate interaction. With a small student group like this, I think interaction worked very well, but it requires a lot of preparation. You have to be extremely conscious of time – I seriously think any two-hour classroom session needs to be rescheduled as a three-hour video session, simply because the interaction is slower and you need to have breaks.

As Winston Churchill almost said (he said a lot, didn’t he): We make our tools, and then our tools make us. We now have the tools, it will be interesting to see how the second part of this transition plays out.

Summer reading for the diligent digital technology student

Eivind Grønlund, one of my students at the Informatics: Digital Business and Leadership program at the University of Oslo, sent me an email asking what to read during the summer to prepare for the fall.

Well, I don’t believe in reading textbooks in the summer, I believe in reading things that will excite you and make you think about what you are doing and slightly derail you in a way that will make you a more interesting person when Fall comes. In other words, read whatever you want.

That being said, the students at DigØk have two business courses next year – one on organization and leadership, one on technology evolution and strategy. Both will have a focus on basics, with a flavor of high tech and the software business. What can you read to understand that, without having to dig into textbooks or books that may be on the syllabus, like Leading Digital, The Innovator’s Solution, Enterprise Architecture as Strategy, or Information Rules?

Here are four books that are entertaining and wise and will give you an understanding of how humans and technology interact and at least some of the difficulties you will run into trying to manage them – but in a non-schoolbook context. Just the thing for the beach, the mountain-top, the sailboat.

  • Neal Stephenson: Cryptonomicon. The ultimate nerd novel. A technology management friend of mine re-reads this book every summer. It involves history, magical realism (the character of Enoch Root), humor, startup lore, encryption and, well, fun. Several stories in one: about a group of nerds (main protagonist: Randy Waterhouse) doing a startup in Manila and other places in 1999, his grandfather, Lawrence Pritchard Waterhouse, running cryptographic warfare against the Germans and Japanese during WWII, and how the stories gradually intersect and come together towards the end. The gallery of characters is hilarious and fascinating, and you can really learn something about startups, nerd culture, programming, cryptography and history along the way. Highly recommended.
  • Tracy Kidder: The Soul of a New Machine. This 1981 book describes the development of a Data General minicomputer as a deep case study of the people in it. It could just as well have been written about any really advanced technology project today – the characters, the challenges, the little subcultures that develop within a highly focused team stretching the boundaries of what is possible. One of the best case studies ever written. If you want to understand how advanced technology gets made, this is it.
  • Douglas Hofstadter: Gödel, Escher, Bach. This book (aficionados just call it GEB) was recommended to me by one of my professors in 1983, and is responsible for me wanting to be in academia and have the time and occasion to read books such as this one. It is also one of the reasons I think The Matrix is a really crap movie – Hofstadter said it all before, and I figured out the plot almost at once and thought the whole thing a tiresome copycat. Hofstadter writes about patterns, abstractions, and the concept of meta-phenomena, but mostly the book is about self-referencing systems. As with any good book that makes you think, it is breath-taking in what it covers, pulling together music, art, philosophy, computer science (including a bit on encryption, always a favorite) and history. Not for the faint-hearted, but as Erling Iversen, my old boss and an extremely well-read man, said: you can divide techies into two kinds – those who have read Hofstadter, and those who haven’t.
  • Tim O’Reilly: WTF? What’s the Future and Why It’s Up to Us. Tim is the founder of O’Reilly and Associates (the premier source of hands-on tech books for me) and has had a ringside seat – and been a participant – in anything Internet and digital tech since the nineties. This fairly recent book provides a good overview of the major evolutions and battles of the last 10-15 years and is a great catcher-upper for the young person who has not been part of the revolution (so far).

And with that – have a great summer!

The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – Hammurabi’s code on buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer, in others licensing is required, perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined “digital” around 1942, Tukey coined “software” in 1952. The 1968-69 conferences on software engineering popularized the term, a CACM letter by Anthony Oettinger used it in 1966, and it was in use before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra, and Knuth saw it variously as a practical art, an art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with the forces that are apparent when designing and building software systems. Good engineering is based on discovery, invention, and implementation – and this has been the pattern of software engineering: a dance between science and implementation.

Lovelace first programmer, algorithmic development. Boole and boolean algebra, implementing raw logic as “laws of thought”.

First computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanisation with punch cards (Gertrude Blanch on human computing, 1938; J. Presper Eckert on punch card methods, 1940 – the first methodology with pattern languages).

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high order languages, the first general purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn on real-time computing (1948), Wheeler and Gill on subroutines (1949), Eckert and Mauchly with software as a thing in itself (1949). John Backus with imperative programming (Fortran), Goldstine and von Neumann on flowcharts (1947). Commercial computers – LEO for a tea company in England, John Pinkerton creating its operating system; Hopper with ALGOL and COBOL, reuse (Bemer, Sammet). The SAGE system important, command and control – Jay Forrester and Whirlwind 1951, Bob Evans (SAGE, 1957), Strachey on time sharing 1959, St Johnson with the first programming services company (1959).

Software crisis – not enough programmers around, machines more expensive than the humans; a priesthood of programming – carry your program over, come back for the results, batch processing. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problems were complexity and productivity, hence software engineering (Margaret Hamilton) arguing that the process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. First SW engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – second generation of SW engineering: UML (Booch 1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and the Internet changed things again: Sutherland and Scrum, Beck on eXtreme programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (the first major contract between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organizational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: Ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time, trusted, and there we are. Computer science focused on physics and algorithms, software engineering on process, architecture, economics, organisation, HCI. SWEBOK first 2004, latest 2014, codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most SW development is like engineering a city: you have to change things in the presence of things that you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as Deepmind. There is still a lot we don’t know how to do – such as architecture for AI; there is little rigorous specification and testing. Orchestration of AI will change how we look at systems: teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.


Brilliance squared

Stephen Fry and Steven Pinker are two of the people I admire the most, for their erudition, extreme levels and variety of learning, and willingness to discuss their ideas. Having them both on stage at the same time, one interviewing the other (on the subject of Pinker’s latest book, Enlightenment Now), is almost too much, but here they are:

(I did, for some reason, receive an invitation to this event, and would have gone despite timing and expense if at all possible, but it was oversubscribed before I could click the link. So thank whomever for Youtube, I say. It can be used to spread enlightenment, too.)

Interesting interview with Rodney Brooks

Boingboing, which is a fantastic source of interesting stuff to do during Easter vacation, has a long and fascinating interview by Rob Reid with Rodney Brooks, AI and robotics researcher and entrepreneur extraordinaire. Among the things I learned:

  • What the Baxter robot really does well – interacting with humans and not requiring 1/10 mm precision, especially when learning
  • There are not enough workers in manufacturing (even in China); most of the ones working there spend their time waiting for some expensive piece of capital equipment to finish
  • The automation infrastructure is really old, still using PLCs that refresh and develop really slowly
  • Robots will be important in health care – preserving people’s dignity by allowing them to drive and stay at home longer, with robots that understand force and softness and can do things such as help people out of bed.
  • He has written an excellent 2018 list of dated predictions on the evolution of robotic and AI technologies, highly readable, especially his discussions on how to predict technologies and that we tend to forget the starting points. (And I will add his blog to my Newsblur list.)
  • He certainly doesn’t think much of the trolley problem, but has a great example to understand the issue of what AI can do, based on what Isaac Newton would think if he were transported to our time and given a smartphone – he would assume that it would be able to light a candle, for instance.

Worth a listen.

Neural networks – explained

As mentioned here a few times, I teach an executive course called Analytics for Strategic Management, as well as a short program (three days) called Decisions from Data: Driving an Organization on Analytics. We have just finished the first version of both of these courses, and it has been a very enjoyable experience. The students (in both courses) have been interested and keen to learn, bringing relevant and interesting problems to the table, and we have managed to do what it said on the tin (I think) – make them better consumers of analytics, capable of having a conversation with the analytics team, employing the right vocabulary and able to ask more intelligent questions.

Of course, programs of this type do not allow you to dive deep into how things work, though we have been able to demonstrate MySQL, Python and DataRobot, and also give the students an understanding of how rapidly these things are evolving. We have talked about deep learning, for instance, but not about how it works.

But that is easy to fix – almost everything about machine learning is available on Youtube and in other web channels, once you know a little bit of the language. For instance, to understand how deep learning works, you can check out a series of videos from Grant Sanderson, who produces very good educational videos on the channel 3Blue1Brown.

(There are follow-up videos: Chapter 2, Chapter 3, and a formal calculus appendix. This Youtube channel has a lot of other math-related videos, too, including a great explanation of how Bitcoin works, which I’ll have to get into at some point, since I keep being asked why I don’t invest in Bitcoin.)

Of course, you have to be rather interested to dive into this, and it certainly is not required reading for an executive who only wants to be able to talk intelligently to the analytics team. But it is important (and a bit reassuring) to note the mechanisms employed: breaking a very complex problem up into smaller problems, breaking those up into even smaller problems, solving the small problems by programming, then stepping back up. For those of you with high school math: It really isn’t that complicated. Just complicated in layers.

And it is good to know that all this advanced AI stuff really is rather basic math. Just applied in an increasingly complex way, really fast.
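To make the “basic math in layers” point concrete, here is a minimal sketch of one forward pass through a tiny two-layer network in plain Python. The weights, biases and inputs are invented for illustration (real systems do this with matrices over millions of parameters), but the arithmetic per neuron really is this simple:

```python
import math

def sigmoid(x):
    # squashing function - ordinary high-school math
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each neuron: a weighted sum of its inputs plus a bias, then squashed
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# two inputs -> two hidden neurons -> one output neuron
# (all numbers are made up for the sake of the example)
hidden = layer([0.5, 0.8],
               weights=[[0.9, -0.4], [0.2, 0.7]],
               biases=[0.1, -0.3])
output = layer(hidden,
               weights=[[1.5, -1.1]],
               biases=[0.2])
```

“Learning” is then just the other direction: nudging those weights, a little at a time, to make the output less wrong – which is what the videos on gradient descent and backpropagation explain.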

The nastiness of immigrant fear

This piece by Maria (Farrell?) is a long and very insightful read about the emotional impact of the Brexit debacle on immigrants – and more generally, about the nastiness of reducing immigrants (or, for that matter, any foreigner) to a number and a category.

Anyone who thinks being an immigrant, even a deluxe EU three million-type immigrant, is easy, should try it. We compete on equal terms with all comers, but with no social or economic safety net and, for many, hustling like mad in second and third languages. No dole, no network of couches to sleep on, no contacts and no introductions; qualifications from institutions you’ve never heard of, references from employers you aren’t sure are real but can’t be bothered to check, acting as daily fodder for stereotypical jokes we laugh off to show we’re one of you. You don’t hear us complaining about it because it’s just part of the deal. But when the terms of the deal change, and you tell us we’re social welfare parasites who are also, somehow, taking all the jobs and are the reason the country is failing, then the deal is probably dead.

How anyone can think that shutting yourself off from the world and fantasising about going back to a nonexistent 1960s idyll is in any way beneficial is beyond me. And this nastiness is not limited to Britain or Trump’s USA – far from it; Norway has its share of little people with big fears as well.

To get new ideas, increase the variety of sources, expose yourself to new experiences, and embrace that which you cannot understand.

Assuming you want new ideas, of course.

A tour de Fry of technology evolution

There are many things to say about Stephen Fry, but it is enough to show this video, filmed at Nokia Bell Labs, explaining, amongst other things, the origin of microchips, the power of exponential growth, and the adventure and consequences of performance and functionality evolution. I am beginning to think that “the apogee, the acme, the summit of human intelligence” might actually be Stephen himself:

(Of course, the most impressive feat is his easy banter on hard questions after the talk itself. Quotes like: “[and] who is to program any kind of moral [into computers ]… If [the computer] dives into the data lake and learns to swim, which is essentially what machine learning is, it’s just diving in and learning to swim, it may pick up some very unpleasant sewage.”)

Science fiction and the future

I am on the editorial board of ACM Ubiquity – and we are in the middle of a discussion of whether science fiction authors get things right or not, and whether science fiction is a useful predictor of the future. I must admit I am not a huge fan of science fiction – definitely not films, which tend to contain way too many scenes of people in tights staring at screens. But I do have some affinity for the more intellectual variety which tries to say something about our time by taking a single element of it and magnifying it.

So herewith, a list of technology-based science fiction short stories available on the web, a bit of fantasy in a world where worrying about the future impact of technology is becoming a public sport:

  • The machine stops by E. M. Forster is a classic about what happens when we make ourselves completely dependent on a (largely invisible) technology. Something to think about when you sit surfing and video conferencing in your home office. First published in 1909, which is more than impressive.
  • The second variety by Philip K. Dick is about what happens when we develop self-organizing weapons systems – a future where warrior drones take over. Written as an extension of the cold war, but in a time when you can deliver a hand grenade with a drone bought for almost nothing at Amazon, and remote-controlled wars initially may seem bloodless, it behooves us to think ahead.
  • Jipi and the paranoid chip is a brilliant short story by Neal Stephenson – the only science-fiction author I read regularly (though much of what he writes is more historic/technothrillers than science fiction per se). The Jipi story is about what happens when technologies develop a sense of self and self preservation.
  • Captive audience by Ann Warren Griffith is perhaps not as well written as the others, but still: it is about a society where we are not allowed not to watch commercials. And that should be scary enough for anyone bone tired of all the intrusive ads popping up everywhere we go.

There is another one I would have liked to have on the list, but I can’t remember the title or the author. It is about a man living in a world where durable products are not allowed – everything breaks down after a certain time so that the economy is maintained because everyone has to buy new things all the time. The man is trying to smuggle home a wooden garden bench made for his wife out of materials that won’t degrade, but has trouble with a crumbling infrastructure and the wrapping paper dissolving unless he gets home soon enough…

Big Data and analytics – briefly

Data and data analytics are becoming more and more important for companies and organizations. Are you wondering what data and data science might do for your company? Welcome to a three-day ESP (Executive Short Program) called Decisions from Data: Driving an Organization with Analytics. It will take place at BI Norwegian Business School December 5-7 this year. The short course is an offshoot of our very popular executive programs Analytics for Strategic Management, which are fully booked. (Check this list (in Norwegian) for a sense of what those students are doing.)

Decisions from Data is aimed at managers who are curious about Big Data and data science and want an introduction and an overview, without having to take a full course. We will talk about and show various forms of data analysis, discuss the most important obstacles to becoming a data-driven organization and how to deal with data scientists, and, of course, give lots of examples of how to compete on analytics. The course will not be tech-heavy, but we will look at and touch a few tools, just to get an idea of what we are asking those data scientists to do.

The whole thing will be in English because, well, the (in my humble opinion) best people we have on this (Chandler Johnson and Alessandra Luzzi) are from the USA and Italy, respectively. As for myself, I tag along as best I can…

Welcome to the data revolution – it starts here!

Singularity redux

From Danny Hillis: The Pattern on the Stone, which I am currently reading, hunting for simple explanations of technological things:

Because computers can do some things that seem very much like human thinking, people often worry that they are threatening our unique position as rational beings, and there are some who seek reassurance in mathematical proofs of the limits of computers. There have been analogous controversies in human history. It was once considered important that the Earth be at the center of the universe, and our imagined position at the center was emblematic of our worth. The discovery that we occupied no central position – that our planet was just one of a number of planets in orbit around the Sun – was deeply disturbing to many people at the time, and the philosophical implications of astronomy became a topic of heated debate. A similar controversy arose over evolutionary theory, which also appeared as a threat to humankind’s uniqueness. At the root of these earlier philosophical crises was a misplaced judgment of the source of human worth. I am convinced that most of the current philosophical discussions about the limits of computers are based on a similar misjudgment.

And that, I think, is one way to think about the future and intelligence, natural and artificial. Works for me, for now. No idea, of course, whether this still is Danny’s position, but I rather think it is.

Sapiens unite!

Sapiens: A Brief History of Humankind by Yuval Noah Harari

My rating: 4 of 5 stars

This book (recommended by Grady Booch in his recent talk) attempts to give a brief history of mankind – specifically, Homo sapiens, as opposed to Neanderthals and other hominids – in one book (a bit reminiscent of Geoffrey Blainey’s A Short History of the World). As such it is interesting, especially the early parts about the transition from hominids to collaborating humans and the cognitive revolution 70,000 years ago. It is very clearly written – for instance, the chapter on capitalism and the importance of credit and creditworthiness is something I could hand out to my students directly as a brief explanation of what the fuss is all about. The book has been a success, and deservedly so – very rationalist, well informed, if a bit narrow in perspective here and there. The author seems to have a soft spot for hunter-gatherer societies (leading him to describe the agricultural revolution as a step backward for individuals, if not for the human race), and a digression on whether humans are more or less happy now (has historical progress done anything to our serotonin levels? Answer: no, it hasn’t, which sort of renders the argument about agrarianism moot) veers towards ranting.

The best part is the way the author describes how much of history and our place in it now is based on inter-subjective fantasies – such as money, religion and states, which exist purely in our minds, because we agree between ourselves that they do.

An easy read, entertaining, and with quite a few very quotable passages here and there, for instance these on our bioengineered future:

Biologists the world over are locked in battle with the intelligent-design movement, which opposes the teaching of Darwinian evolution in schools and claims that biological complexity proves there must be a creator who thought out all biological details in advance. The biologists are right about the past, but the proponents of intelligent design might, ironically, be right about the future.

Most of the organisms now being engineered are those with the weakest political lobbies – plants, fungi, bacteria and insects.


View all my reviews

The disrupted history professor

Professor Jill Lepore, chair of Harvard’s History and Literature program, has published an essay in the New Yorker, sharply critical of Clayton Christensen and his theory of disruptive innovations. The essay has generated quite some stir, including a rather head-shaking analysis by Will Oremus in Slate.

I find Lepore’s essay rather puzzling, and, quite frankly, unworthy of a professor of history, Harvard or not. At this point, I should say that I am not an unbiased observer here – Clay is a personal friend of mine, we went through the doctoral program at Harvard Business School together (he started a year before me), he was on my thesis committee (having graduated three years ahead of me) and we have kept in touch, including him coming to Norway for a few visits and one family vacation including a great trip on Hurtigruten. Clay is commonly known as the “gentle giant” and one of the most considerate, open and thoughtful people I know, and seeing him subjected to vituperative commentary from morons quite frankly pains me.

Professor Lepore’s essay has one very valid point: Like any management idea, disruptive innovation is overapplied, with every technology company or web startup claiming that their offering is disruptive and therefore investment-worthy. As I previously have written: If a product is described as disruptive, it probably isn’t. A disruptive product is something your customers don’t care about, with worse performance than what you have, and with lower profit expectations. Why in the world would you want to describe your offering as disruptive?

That being said, professor Lepore’s essay (I will not call her Jill, because that seems to be a big issue for some people; but since I have met Clay – most recently last week, actually – I will refer to him as Clay) shows some remarkable jumps to non-conclusions. She starts out with a very fine summary of what the theory of disruption says:

Christensen was interested in why companies fail. In his 1997 book, “The Innovator’s Dilemma,” he argued that, very often, it isn’t because their executives made bad decisions but because they made good decisions, the same kind of good decisions that had made those companies successful for decades. (The “innovator’s dilemma” is that “doing the right thing is the wrong thing.”) As Christensen saw it, the problem was the velocity of history, and it wasn’t so much a problem as a missed opportunity, like a plane that takes off without you, except that you didn’t even know there was a plane, and had wandered onto the airfield, which you thought was a meadow, and the plane ran you over during takeoff. Manufacturers of mainframe computers made good decisions about making and selling mainframe computers and devising important refinements to them in their R. & D. departments—“sustaining innovations,” Christensen called them—but, busy pleasing their mainframe customers, one tinker at a time, they missed what an entirely untapped customer wanted, personal computers, the market for which was created by what Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.

She then goes on to say that the theory is mis- and overapplied, and I (and certainly Clay) couldn’t agree more. Everyone and their brother is on an innovation bandwagon, and way too many consulting companies are peddling disruption just like they were peddling business process reengineering back in the nineties (I worked for CSC Index and caught the tail end of that mania). Following this, she points out that Clay’s work is based on cases (it is), is theory-building rather than theory-confirming (yep), and that you can find plenty of cases of things that were meant to be disruptive but weren’t, or companies that were disruptive but still didn’t succeed. All very well, though, I should say, much of this is addressed in Clay’s later books and various publications, including a special issue of the Journal of Product Innovation Management.

(Curiously, she mentions that she has worked as an assistant to Michael Porter’s assistant, apparently having a good time and seeing him as a real professor. She then goes on to criticise the theory of disruptive innovation as having no predictive power – but the framework that Porter is most famous for, the five forces, has no predictive power either: It is a very good way to describe the competitive situation in an industry, but offers zero guidance as to what you actually should do if you are, say, in the airline industry, which scores very badly on all five dimensions. There is a current controversy between Clay and Michael Porter on where the Harvard Business School (and, by implication, business education in general) should go. The controversy is, according to Clay, mostly “ginned up” in order to make the Times article interesting, but I do wonder what professor Lepore’s stakes are here.)

The trouble with management ideas is that while they can be easily dismissed when commoditized and overapplied, most of them actually start out as very good ideas within their bounds. Lepore feels threatened by innovation, especially the disruptive kind, because it shows up both in her journalistic (she is a staff writer with the New Yorker) and academic career. I happen to think that the framework fits rather well in the newspaper industry, but then again, I have spent a lot of time with Schibsted, the only media company in the world that has managed to make it through the digital transition with top- and bottom-line growth, largely by applying Clay’s ideas. But for Lepore, innovation is a problem because it is a) unopposed by intellectuals, b) happening too fast, without giving said intellectuals time to think, and c) done by the wrong kind of people (that is, youngsters slouching on sofas, doing little work since most of their attention is spent on their insanely complicated coffee machines, which “look like dollhouse-size factories”.) I am reminded of “In the beginning…was the command line.”, Neal Stephenson‘s beautiful essay about technology and culture, where he points out that in

… the heyday of Communism and Socialism, [the] bourgeoisie were hated from both ends: by the proles, because they had all the money, and by the intelligentsia, because of their tendency to spend it on lawn ornaments.

And then Lepore turns bizarre, saying that disruptive innovation does not apply in journalism (and, by extension, academia) because “that doesn’t make them industries, which turn things into commodities and sell them for gain.” Apparently, newspapers and academia should be exempt from economic laws because, well, because they should. (I have had quite a few discussions with Norwegian publishing executives, who seem to think so for their industry, too.)

I think newspapers and academic institutions are industries – knowledge industries, operating in a knowledge economy, where things are very much turned into commodities these days, by rapidly advancing technology for generating, storing, finding and communicating information. The increased productivity of knowledge generation will mean that we will need fewer, but better, knowledge institutions. Some of the old ones will survive, even prosper. Some will be disrupted. Treating disruptive innovation as a myth certainly is one option, but I wish professor Lepore would base that decision on something more than what appears to be rhetorical comments, a not very careful reading of the real literature, and, quite frankly, wishful thinking.

But I guess time – if not the Times – will show us what happens in the future. As for disruption, I would rather be the disruptor than the disruptee. I would have less money and honor, but more fun. And I would get to write the epitaph.

But then again, I have an insanely complicated coffee machine. And now it is time to go and clean it.

Interview by Peter Lorange

Peter Lorange has posted an interview (in German) with me on his blog. (Trust me, I don’t speak much German beyond “Bitte, bitte Kellner, noch ein Bier.” I can sort of read it, though.) I will be teaching a class (with Margherita Pagani) at his institute in a couple weeks.

I am posting the English version of the interview below – good questions, and I am happy with the answers, too.

1. There is no single business which is not affected by information technology in whatsoever way. Is there a general rule of thumb to follow in the e-commerce?

No – except, possibly, don’t ignore it or treat it as an afterthought to your regular business. E-commerce is increasingly the “normal” way to do things, and companies that are successful devote much time and many resources to making sure that their value proposition online is as well thought out and delivered as anything else they do.

A persistent problem in many businesses is that, faced with new, Internet-enabled competition, they try to preserve their existing distribution channels and business models by mimicking them online. In some instances, this is the right thing to do, but surprisingly often, doing so leaves you open to competitors that have no existing business to defend. Competition online is often subject to strong network effects, meaning that it can be extremely important to establish a dominant position early – before the economics of the market financially justifies it.

2. Some businesses are quite successful although they are not working with the newest, latest technology. Why, to quote the title of one of your papers, does the best technology not always win?

My point about the best technology not always winning is more directed at technologists – who often think that the technology that is “best” in a technical sense (most advanced, say, or most adherent to the principles of technology the technologist believes in) deserves to win. If it doesn’t, the technologist concludes that this is due to a conspiracy, most commonly arranged by whoever ended up winning the market – be it Microsoft, Google, Facebook, or whoever.

For every new technology that comes along, you will always find a number of initiatives and companies that didn’t quite cut it, even though their ideas were right and the implementation beautiful. Perhaps their timing was wrong. Perhaps they chose the wrong initial market. Perhaps they were part of a larger company that didn’t understand the importance of the new technology or were fearful of its consequences. Or perhaps they just had bad luck.

3. Conversion is a keyword in today’s e-commerce. How can a merchant convert visits on his website into sales?

To a large extent, what makes you successful online is no different from what makes you successful in any business: Offering a good product or service at a price the customer is willing to pay. Being online, however, allows plenty of opportunities to surround your existing product or service with electronically enabled experiences and extensions.

A big problem with many online offerings, in addition, is that they impose complicated procedures for the customer, especially around payment. Sometimes these complications are deliberate – to make sure online sales do not harm traditional sales, for instance. Sometimes they are a result of thinking too much about security. Sometimes they come from an insistence on making electronic commerce simple for the company rather than the customer. Small differences in design, especially of the process the customer has to go through, can make a big difference.

4. Design is not just what it looks like. Design is how it works, said Apple co-founder Steve Jobs. You claim that the presence on the web is no longer determined by having a nice web site only. And yet, contemporary web-based design can be used to generate business. Where do you see specific triggers at the interface between design, usability and conversion?

In general, design of a web site matters – but content matters more than colors, pictures and logos. Specifically, many companies forget that customers will enter their web site not through the front door – i.e., through the home page – but directly into any page visible, often through search engines. This means that you must design your web page not just to be aesthetically pleasing – it must also be logical in its structure, be consistent in its message and quality no matter where the customers come from, and, most importantly, easy for customers to find and link to. If you type the name of your product or your product category into a search engine, your site better be the one that pops up on top – or you have done something wrong.

5. For a few years already another buzzword has been on everyone’s lips: social media. In brief, where do you see the most poignant relation between higher sales and social media – if there is any at all?

Social media can be important – especially if you sell branded, high-end goods and services. They can enhance your value offering by providing customers contact with each other – many technology companies, for instance, use electronic forums to let customers help each other use, fix and even extend their offerings. They can also be a threat – news travels extremely fast on social networks, and you certainly don’t want to be the company whose poor service or stiff prices everyone is talking about. That being said, social networks offer you a chance to quickly fix mistakes – and to communicate how fast you fixed them. In short, social media offers you and your reputation everything a small town offers – only on a much larger and much faster scale.

6. Online business and e-commerce promises opportunities. On the downside, like everything, ecommerce is not only related with opportunities, but also with threats.

For most companies, e-commerce is a good opportunity, but for many it can be the first chink in the armor, the first sign that an industry upheaval is on its way. For the music industry, for publishers, for newspapers and for anyone selling access to information or entertainment, e-commerce can, long-term, be a threat to the company’s whole existence. The key lies in recognizing this threat early and turning the digital marketplace into an opportunity. For every industry facing a disruptive innovation threat such as e-commerce, there are companies that go out of existence, but also existing companies that seize the initiative and thrive in a digital environment. Often, these companies owe their existence to executives who had the foresight to see what was going to happen before it showed up in the financial results – and the legitimacy with their shareholders and their workforce to take action before everyone could see that it was necessary. Surprisingly often, these executives were not technical specialists – but they understood their business thoroughly, and that makes all the difference.

MIT CISR Research Briefing on Enterprise Search

Last year I had the pleasure of spending time as a visiting scholar at MIT Center for Information Systems Research, and here is one of the results, now available from CISR’s website (free, but you have to register. Absolutely worth it, lots of great material):

Research Briefing: Making Enterprise Search Work: From Simple Search Box to Big Data Navigation; Andersen; Nov 15, 2012

Most executives believe that their companies would perform better if “we only knew what we knew.” One path to such an objective is enhanced enterprise search. In this month’s briefing, “Making Enterprise Search Work: From Simple Search Box to Big Data Navigation,” Visiting Scholar Espen Andersen highlights findings from his research on enterprise search. He notes that enterprise search plays a different role from general web or site-specific searches and it comes with its own unique set of challenges – most notably privacy. But companies trying to capitalize on their knowledge will invariably find search an essential tool. Espen offers practical advice on how to develop a more powerful search capability in your firm.

The political process of getting innovation done

Innovation is often about politics. Together with my excellent colleague Ragnvald Sannes I run a course called Strategic Business Development and Innovation (it is done in Norwegian, but if you are interested, we would be glad to export the concept, in English), where we take groups of students through an innovation process (with their own, very real, projects) over two semesters. The course is done in cooperation with Accenture’s Technology Lab in Sophia Antipolis and is one of the most enjoyable things I do as a teacher.

Anyway. This note is to discuss something which came up in a web conference today – the political side of doing innovation. Many of the students we have come from public organizations, from the health care industry, or from educational or research-based institutions. In all of them (well, actually, in all organizations, but more so in those where profit is not the yardstick that trumps everything) politics are important, to the point where a project’s success depends on it. Since a number of our students also are engineers and/or IT people, with a very straightforward and rationalistic view of how things should be done (if the solution is better than the current one, well, then why don’t we adopt it?), I need to explain the nature of political processes in organizations.

I am not an expert in that particular field, but I have been involved in a few projects where politics have been important – and have found the work of March, Cohen and Olsen very useful – not just as theory, but also as a very practical checklist. These three professors are famous for the Garbage Can Model, explained in the classic article Cohen, M. D., J. G. March and J. P. Olsen (1972). “A Garbage Can Model of Organizational Choice.” Administrative Science Quarterly 17(1). This article (which can be found here) is cited more than 6000 times and makes a lot of sense to me, but it is not easy to understand (and that is not just because the specification of the model is in Fortran source code.) It posits that politically oriented organizations (they studied universities in particular, which for most purposes are anarchies) make decisions by constructing “garbage cans” (one for each decision), and that the garbage can is a meeting point for choices, problems, solutions, and decision makers (participants), heavily dependent on energy. Decisions seek decision makers, solutions seek problems, and vice versa. Getting things done in such an environment means constructing these garbage cans and filling them with the right combination of problems, solutions, choices and participants.
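For the programmers among my readers, the core mechanic can be sketched in a few lines of Python. To be clear: this is a toy illustration of my own, not a reimplementation of the original Fortran specification – the parameters and the energy rule are invented for the example. The idea it shows is simply that choices (“cans”) collect problems and participant energy, and a decision happens when a can has gathered enough energy to cover its problem load:

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

# Illustrative parameters (not from the 1972 article)
NUM_CHOICES, NUM_PROBLEMS, NUM_PARTICIPANTS, ROUNDS = 4, 10, 6, 20

# Each open choice is a "garbage can" with accumulated energy and attached problems
choices = {c: {"energy": 0.0, "problems": set()} for c in range(NUM_CHOICES)}
unattached = set(range(NUM_PROBLEMS))
decisions = []  # (choice, resolved problems) pairs

for _ in range(ROUNDS):
    # Problems drift into whichever open can they happen upon
    for p in list(unattached):
        c = random.choice(list(choices))
        choices[c]["problems"].add(p)
        unattached.remove(p)
    # Each participant spends energy on one (randomly chosen) open can
    for _ in range(NUM_PARTICIPANTS):
        c = random.choice(list(choices))
        choices[c]["energy"] += random.random()
    # A choice is made once its energy covers its problem load
    for c in list(choices):
        can = choices[c]
        if can["energy"] >= 1.0 + len(can["problems"]):
            decisions.append((c, sorted(can["problems"])))
            del choices[c]
    if not choices:
        break

for c, probs in decisions:
    print(f"choice {c} decided, resolving problems {probs}")
```

Run it a few times with different seeds and you see the political point: which problems get resolved depends less on their merits than on which can they happened to land in, and on where the participants chose to spend their energy – which is exactly what the tactics below exploit.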

This sounds rather theoretical, and it is. Fortunately, Cohen and March wrote an (in my opinion) excellent book (Cohen, M. D. and J. G. March (1986). Leadership and Ambiguity: The American College President. Boston, MA, Harvard Business School Press.) a few years later, with less theory and more application. Based on interviews with a number of university presidents as well as their garbage can model, they discuss the nature of getting things done in a university environment, where there is ambiguity of purpose, power, experience and success. They finish with a list of eight basic tactics for getting things done – probably at the instigation of Harvard Business School Press, which primarily caters to business people and wants applicability, not just description.

I have found this list tremendously useful when trying to get decisions made – and have observed others doing this both very well and very badly. Here it is, with their points in boldface and my (probably imperfect, it is a few years since I read this) interpretation appended:

  1. Spend time. Getting things done will take time – you need to talk to people, create language, make people see your point. If you are not willing to spend that time, you might make some decisions, but people will not follow them. Decision making is social, so decision makers in these environments need to be. The winners in political organizations are often those with the most time – which is why many universities are dominated by the administration rather than the faculty, who have other calls for their time and do not come in to the office every day. (See this cartoon for an excellent description).
  2. Persist. One of the most frustrating things (I have seen this when businessmen come in to lead political organizations, several times) in a political setting is that decisions seldom seem to really be taken – there might be a decision, but every time it comes up, it gets revisited. In other words, a decision made can always be raised again – so never give up, you can always get the organization to reconsider, either the same decision directly or the same decision dressed up in new language.
  3. Exchange status for substance. As someone said at some point, it is amazing what you can get done if you are prepared to forgo recognition for it. There are many leaders who want to look good and make decisions, but don’t have the knowledge or energy to do so. Make decisions easy for them – you can get a lot done if you make decision-makers look good in the process.
  4. Facilitate opposition participation. Rather than trying to overpower the opposition, find ways for them to participate in the new way of doing things. This is one of the reason why processes and fields frequently get renamed – to allow groups to continue doing what they are doing or want to do, but in new contexts.
  5. Overload the system (to change decision making style). Decision-making time expands to fill the entire time available (alternatively, a normal meeting is over when everything is said, an academic meeting is over when everything has been said by everyone.) By giving the system lots of decisions to make (i.e., many ), this style changes – and you can get your decision through because nobody has enough time or energy to give it the full treatment.
  6. Provide garbage cans. Provide arenas for discussion as distractions, to consume energy from decision-makers.
  7. Manage unobtrusively. You can get things changed by changing small things, and in succession. I have seen examples where you get a strategic goal set up that everyone can agree to but few define (“make us a more knowledge-based organization”), get resources allocated to it, and then propose lots of projects under this heading – which now is about fulfillment of a strategy (albeit redefined) rather than an entirely new strategic direction.
  8. Interpret history. Volunteer to write meeting minutes, and distribute them late enough that most participants have forgotten the details. History, traditionally, is written by the winners (except, perhaps, for the Spanish Civil War), but you can make it the other way around – you become the winner by writing history.

(After writing most of this I found this blog post by David Maister, which summarizes it much better than I do, in the context of professional service firms.) Understanding politics is very much about recognizing these tactics and using them. It may seem Machiavellian, but then Machiavelli was one of the first political theorists and knew what he was talking about.

And now I feel a need to see the next episode of House of Cards on Netflix. Garbage cans in action…

Doc Searls on the market of one

I quite liked Doc Searls’ piece in the Wall Street Journal on the market of one – or as he calls it, power to individuals to broadcast their intents until they find a vendor that matches what they want, not what the vendor wants to sell:

Since the Industrial Revolution, the only way a company could scale up in productivity and profit was by treating customers as populations rather than as individuals—and by treating employees as positions on an organization chart rather than as unique sources of talent and ideas. Anything that stood in the way of larger scale tended to be dismissed.

The Internet has challenged that system by giving individuals the same power. Any of us can now communicate with anybody else, anywhere in the world, at costs close to zero. We can set up our own websites. We can produce, publish, syndicate and do other influential things, with global reach. Each of us can be valuable as unique individuals and not only as members of groups.

According to David Weinberger, the caption “Customer as God” was not something Searls wrote himself – it does look a bit over the top. But customer power is increasingly on the rise – though it has come much further in the USA than in Europe, no matter how much more legislation the EU has compared to the USA. The wonders of competition and falling transaction costs…

The solution to American unemployment…

(Flash thought as I am listening to Erik Brynjolfsson and Andy McAfee talk about Race Against the Machine at the MIT Center for Digital Business research conference – an excellent event, by the way.)

The core issue identified in Race Against the Machine is that technology improves faster than humans. Consequently, a rising number of people get automated out of a job. Previously, that has not been a long-term problem, because new industries have sprung up to hire. Now, however, the new industries hire very few people (haven’t checked the facts, but someone said that Facebook, Google, Twitter and Amazon collectively have about 100,000 employees, which is the job growth needed per month to keep up with population growth in the US workforce.)

So – we need to find new areas where we can hire lots of people, to do jobs that, at least as of now, cannot be automated.

Here is my tongue-in-cheek solution:

1. The US has a rising (or, perhaps, expanding) obesity problem.

2. Obesity is expensive, since obese people disproportionately consume health care.

3. Take all the unemployed, sort them into a) thin and b) thick.

4. Hire group a) to be personal coaches to group b).

5. Pay for it with the savings in health costs.

Great, job done. Now for some real work…

(On a serious note, first-line health care is probably an area that could consume a lot of workers. On the other hand, it will also experience many job losses – health care is vastly inefficient in the US now, primarily because it is so cumbersome to administrate and pay for.)

Update 5/24: I was wrong – personalized weight loss coaching is now available as an app.