Category Archives: Nerdy ruminations

Music nerding (well, procrastination)

What the heck, I am suffering from low productivity today anyway. So: I can heartily recommend Rick Beato's channel Everything Music if you are in need of distraction. He is a music theorist and producer who first became Youtube famous with a video of his son having perfect pitch, and he discusses all kinds of music theory. Most will like his lists of greatest guitar solos and so on, but I think his best video so far is this one, which was recorded, I see, the day before Eddie Van Halen died:

Now, back to work, you hear?

The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – the Code of Hammurabi on buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer, in others licensing is required – perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined digital around 1942, Tukey coined software in 1952. The 1968–69 NATO conferences on software engineering popularized the term; a CACM letter by Anthony Oettinger used it in 1966, and it was in use even before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra and Knuth saw it variously as practical art, art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with the forces that are apparent when designing and building software systems. Good engineering is based on discovery, invention, and implementation – and this has been the pattern of software engineering: a dance between science and implementation.

Lovelace the first programmer, algorithmic development. Boole and Boolean algebra, implementing raw logic as “laws of thought”.

First computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanisation with punched cards (Gertrude Blanch, human computing, 1938; J. Presper Eckert on punched card methods, 1940) – the first methodology with pattern languages.

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy Du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high order languages and the first general-purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn real-time (1948), Wheeler and Gill subroutines (1949), Eckert and Mauchly with software as a thing of itself (1949). John Backus with imperative programming (Fortran), Goldstine and von Neumann flowcharts (1947). Commercial computers – LEO, built for a tea company in England, with John Pinkerton creating the operating system. Hopper with ALGOL and COBOL, reuse (Bemer, Sammet). SAGE system important, command and control – Jay Forrester and Whirlwind 1951, Bob Evans (SAGE, 1957), Strachey time sharing 1959, Dina St Johnston with the first programming services company (1959).

Software crisis – not enough programmers around, machines more expensive than the humans, priesthood of programming, carry programs over and get results, batch. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problem was complexity and productivity, hence software engineering (Margaret Hamilton), arguing that process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. First SW engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – second generation of SW engineering: UML (Booch, 1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and Internet changed things again: Sutherland and Scrum, Beck on eXtreme programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (first major contract between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organizational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: Ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time, trusted, and there we are. Computer science focused on physics and algorithms, software engineering on process, architecture, economics, organisation, HCI. SWEBOK first 2004, latest 2014, codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most SW development is like engineering a city: you have to change things in the presence of things you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as DeepMind. There is still a lot we don't know how to do – such as architecture for AI, where there is little rigorous specification and testing. Orchestration of AI will change how we look at systems: teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.

Sources: https://twitter.com/Grady_Booch, https://computingthehumanexperience.com/

Interesting interview with Rodney Brooks

Boingboing, which is a fantastic source of interesting stuff to do during Easter vacation, has a long and fascinating interview by Rob Reid with Rodney Brooks, AI and robotics researcher and entrepreneur extraordinaire. Among the things I learned:

  • What the Baxter robot really does well – interacting with humans and not requiring 1/10 mm precision, especially when learning
  • There are not enough workers in manufacturing (even in China); most of the ones working there spend their time waiting for some expensive piece of capital equipment to finish
  • The automation infrastructure is really old, still using PLCs that refresh and develop really slowly
  • Robots will be important in health care – preserving people’s dignity by allowing them to drive and stay at home longer, with robots that understand force and softness and can do things such as help people out of bed.
  • He has written an excellent 2018 list of dated predictions on the evolution of robotic and AI technologies, highly readable, especially his discussions on how to predict technologies and that we tend to forget the starting points. (And I will add his blog to my Newsblur list.)
  • He certainly doesn’t think much of the trolley problem, but has a great example to understand the issue of what AI can do, based on what Isaac Newton would think if he were transported to our time and given a smartphone – he would assume that it would be able to light a candle, for instance.

Worth a listen.

Neural networks – explained

As mentioned here a few times, I teach an executive course called Analytics for strategic management, as well as a short program (three days) called Decisions from Data: Driving an Organization on Analytics. We have just finished the first version of both of these courses, and it has been a very enjoyable experience. The students (in both courses) have been interested and keen to learn, bringing relevant and interesting problems to the table, and we have managed to do what it said on the tin (I think) – make them better consumers of analytics, capable of having a conversation with the analytics team, employing the right vocabulary and being able to ask more intelligent questions.

Of course, programs of this type do not allow you to dive deep into how things work, though we have been able to demonstrate MySQL, Python and DataRobot, and also give the students an understanding of how rapidly these things are evolving. We have talked about deep learning, for instance, but not how it works.

But that is easy to fix – almost everything about machine learning is available on Youtube and in other web channels, once you know a little bit of the language. For instance, to understand how deep learning works, you can check out a series of videos from Grant Sanderson, who produces very good educational videos under the name 3Blue1Brown.

(There are follow-up videos: Chapter 2, Chapter 3, and a formal calculus appendix to Chapter 3. This Youtube channel has a lot of other math-related videos, too, including a great explanation of how Bitcoin works, which I’ll have to get into at some point, since I keep being asked why I don’t invest in Bitcoin all the time.)

Of course, you have to be rather interested to dive into this, and it certainly is not required reading for an executive who only wants to be able to talk intelligently to the analytics team. But it is important (and a bit reassuring) to note the mechanisms employed: breaking a very complex problem up into smaller problems, breaking those up into even smaller problems, solving the small problems by programming, then stepping back up. For those of you with high school math: It really isn’t that complicated. Just complicated in layers.

And it is good to know that all this advanced AI stuff really is rather basic math. Just applied in an increasingly complex way, really fast.
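To see just how basic that math is, here is a minimal sketch of the kind of computation a small network performs – my own illustration in Python with NumPy, not code from the videos, with made-up layer sizes and random weights: each layer is just a weighted sum, an added bias and a squashing function, and “deep” simply means stacking several such layers.

```python
import numpy as np

# Squashing function: maps any number into (0, 1) - high school math.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)                 # random weights, illustration only
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # layer 2: 4 units -> 1 output

def forward(x):
    hidden = sigmoid(W1 @ x + b1)      # solve the small problems...
    return sigmoid(W2 @ hidden + b2)   # ...then step back up and combine them

print(forward(np.array([0.5, -1.2, 0.3])))     # a single number between 0 and 1
```

Training – nudging the weights until the output improves, which is what the videos explain – adds calculus on top, but the building blocks are no more complicated than this.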

A tour de Fry of technology evolution

There are many things to say about Stephen Fry, but it is enough to show this video, filmed at Nokia Bell Labs, explaining, amongst other things, the origin of microchips, the power of exponential growth, and the adventure and consequences of evolving performance and functionality. I am beginning to think that “the apogee, the acme, the summit of human intelligence” might actually be Stephen himself:

(Of course, the most impressive feat is his easy banter on hard questions after the talk itself. Quotes like: “[and] who is to program any kind of moral [into computers]… If [the computer] dives into the data lake and learns to swim, which is essentially what machine learning is, it’s just diving in and learning to swim, it may pick up some very unpleasant sewage.”)

Science fiction and the future

I am on the editorial board of ACM Ubiquity – and we are in the middle of a discussion of whether science fiction authors get things right or not, and whether science fiction is a useful predictor of the future. I must admit I am not a huge fan of science fiction – definitely not films, which tend to contain way too many scenes of people in tights staring at screens. But I do have some affinity for the more intellectual variety which tries to say something about our time by taking a single element of it and magnifying it.

So herewith, a list of technology-based science fiction short stories available on the web, a bit of fantasy in a world where worrying about the future impact of technology is becoming a public sport:

  • The machine stops by E. M. Forster is a classic about what happens when we make ourselves completely dependent on a (largely invisible) technology. Something to think about when you sit surfing and video conferencing in your home office. First published in 1909, which is more than impressive.
  • The second variety by Philip K. Dick is about what happens when we develop self-organizing weapons systems – a future where warrior drones take over. Written as an extension of the Cold War; but in a time when you can deliver a hand grenade with a drone bought for almost nothing at Amazon, and remote-controlled wars initially may seem bloodless, it behooves us to think ahead.
  • Jipi and the paranoid chip is a brilliant short story by Neal Stephenson – the only science-fiction author I read regularly (though much of what he writes is more historic/technothrillers than science fiction per se). The Jipi story is about what happens when technologies develop a sense of self and self-preservation.
  • Captive audience by Ann Warren Griffith is perhaps not as well written as the others, but still: It is about a society where we are not allowed not to watch commercials. And that should be scary enough for anyone bone tired of all the intrusive ads popping up everywhere we go.

There is another one I would have liked to have on the list, but I can’t remember the title or the author. It is about a man living in a world where durable products are not allowed – everything breaks down after a certain time so that the economy is maintained because everyone has to buy new things all the time. The man is trying to smuggle home a wooden garden bench made for his wife out of materials that won’t degrade, but has trouble with a crumbling infrastructure and the wrapping paper dissolving unless he gets home soon enough…

The reassembler

James May – Captain Slow, the butt of many Top Gear jokes about nerds and pedants – has a fantastic little show called The Reassembler, where he takes some product that has been taken apart into little pieces, and puts it together again. It works surprisingly well, especially when he goes off on tangents about corporate history, kids waiting for their birthdays to come, and whether something is a bolt or a screw.

Slow television, nerd style.

Here is one example, you can find others on Youtube:

Singularity redux

From Danny Hillis: The Pattern on the Stone, which I am currently reading, hunting for simple explanations of technological things:

Because computers can do some things that seem very much like human thinking, people often worry that they are threatening our unique position as rational beings, and there are some who seek reassurance in mathematical proofs of the limits of computers. There have been analogous controversies in human history. It was once considered important that the Earth be at the center of the universe, and our imagined position at the center was emblematic of our worth. The discovery that we occupied no central position – that our planet was just one of a number of planets in orbit around the Sun – was deeply disturbing to many people at the time, and the philosophical implications of astronomy became a topic of heated debate. A similar controversy arose over evolutionary theory, which also appeared as a threat to humankind’s uniqueness. At the root of these earlier philosophical crises was a misplaced judgment of the source of human worth. I am convinced that most of the current philosophical discussions about the limits of computers are based on a similar misjudgment.

And that, I think, is one way to think about the future and intelligence, natural and artificial. Works for me, for now. No idea, of course, whether this still is Danny’s position, but I rather think it is.

Friday futuristic reading

I am not a big fan of science fiction – way too many people in tights staring at big screens – but I do like the more intellectual variety where the author tries to say something about today’s world, often by taking a single aspect of it and expanding it. So here is a short list of technology-based short stories, freely available on the interwebs, a bit of reading for anyone who feels they live in a world where the technology is taking over more and more:

  • The machine stops by E. M. Forster is the classic on what happens when we make ourselves totally dependent on mediating technology. Something to think about when you surf and Skype from your home office. Written in 1909, which is more than impressive.
  • The second variety by Philip K. Dick details a future with self-organizing weapon systems, a future where the drones take over. Written during the Cold War, but in a time where warfare is increasingly remote and apparently bloodless there is reason to think about how to enforce the “laws of robotics”.
  • Jipi and the paranoid chip is a brilliant short story by Neal Stephenson, the only sci-fi writer I read regularly (though much of what he writes is historic techno fiction, perhaps fantasy, and not sci-fi per se). It is about what happens if technology becomes self-aware.
  • Captive audience by Ann Warren Griffith is perhaps not as well written, but I like the idea: What happens in a society where we are no longer allowed to block advertising, where AdBlock Plus is theft.

There is another short story I would have liked to include, but I can’t remember the title or author. I think it was about a society where everything is designed with planned obsolescence, where a man is trying to smuggle home an artisanal (and hence, sustainable) wooden bench, but has issues with various products, including the gift wrap, which decays rapidly once it has reached its “best before” time stamp…

And with that, back to something more work-related. Have a great weekend!

The Facebook method of dealing with complexity

Computer systems used to be weak, so we had to make their world simple and standardized. They now can handle almost endless complexity—but we still need to understand how to make the world simple, so we don’t risk burdening the majority of users with the needless complexity of the few. One way of doing this is to adopt Facebook’s approach of “Yes, No and It’s Complicated.”
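To make the idea concrete, here is a minimal sketch of what such a data model might look like – my own illustration in Python, not anything from the essay: two simple codes cover the majority of users, while a free-text escape hatch carries the complexity of the few without complicating everyone else’s data.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    YES = "yes"
    NO = "no"
    COMPLICATED = "it's complicated"   # the escape hatch for the few

@dataclass
class Answer:
    status: Status
    detail: Optional[str] = None       # free text, used only when complicated

def classify(raw: str) -> Answer:
    simple = {"yes": Status.YES, "no": Status.NO}
    key = raw.strip().lower()
    if key in simple:
        return Answer(simple[key])            # the simple majority
    return Answer(Status.COMPLICATED, raw)    # park the complexity, don't lose it

print(classify("yes"))
print(classify("married, but we live in different countries"))
```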

Read the rest of the essay at ACM Ubiquity’s blog.

An oratory masterclass

“We no longer think the world will be saved by politics and rock’n roll. We now believe it will be saved by the life of the mind.” “…playing gracefully with ideas.”

Watch this. If nothing else, study Stephen Fry’s technique.

Unfortunately, I own a lawnmower. Oh well.

(There is a Q&A session as well, available as separate videos.)

R – the swiss army knife of the data scientist

The video below, a talk by John D. Cook (via Flowingdata), is a very nice intro to R for someone who wants to be a data scientist and has some notion or experience of programming. I have begun to look at R, but need a specific project to analyze in order to get into it. When learning a programming language (or any powerful tool, for that matter) it is important to get under the skin of it, to understand it to the point where you don’t look up the function or whatever in the manual because you intuitively know what it would be named, since you think like the developers. (I can’t claim any knowledge like that, except perhaps for IFPS (a defunct financial programming language), REXX (a macro language for IBM mainframes), and Minitab (a statistical package, rather marginalized now).) Learning something to that level requires time and, most importantly, a need. We’ll see.

But it helps to have someone explain things, so I guess watching this video is not a waste of time. It wasn’t for me, anyway. And R certainly is the thing to learn, in this Big Data (whatever that may mean) world. (Though, as is said here, it was never designed for huge data sets. But huge data sets need models to work, and you build those on small data sets…)

The Double from toy to tool

These days there is a lot of writing about how computers (in this case referred to as “robots”) will take over the role of the teacher (as well as many other jobs). I have my own robot, from Double Robotics, and it is gradually becoming a tool rather than a toy – it allows me to extend my reach, rather than automate my job. Granted, so far it has mainly been used for experiments and demonstrations (below, a picture from a meeting of an ICT industry organisation in Norway), but better access and a few technology upgrades have made it much more reliable and gradually turned it into something that is useful rather than just fun.

The practical issues with the Double have been numerous, but have largely been solved. Here they are:

  • The Double required a lot of manual intervention before I could use it – specifically, it was in my office, and the department administrator would have to unlock my office and unplug it to let it out. This was solved by acquiring a docking station and positioning the Double out in the public area of my department (next to the mailboxes, as it happens.) I was worried that someone would make away with it (or steal the iPad) but both are password protected and besides, the department requires an ID card to get in. This has also meant that other department members can use the Double – one colleague has severe allergies and has to go to his mountain cabin for several weeks in the spring every year; he used the Double to attend seminars.
  • The speaker and microphone did not work well. Out of the box, the Double uses the speaker and mike from the iPad. The speaker was too weak, and the iPad microphone picks up too much noise as well as conversations all around rather than what is in front of you. Initially, I solved the speaker problem by using a Bluetooth speaker, but it was on a separate charger and did not work very well. Double Robotics came up with an Audio Kit upgrade which largely has solved the problem – I can now generate enough clear sound that I can use the Double for lecturing, and the directional mike filters out some of the noise and ambient conversations to make communication much more natural.
  • Thirdly, the iPad will sometimes go offline because of interruptions, chiefly software updates. This means it will not be able to set up a connection, and needs a manual restart. This was fixed by running the Double app in Guided Access mode (found under Settings>General>Accessibility>Guided Access), a way of setting the iPad to run one app only, uninterrupted by software upgrades, messages and other distractions.
  • Fourth, the sound sometimes disappears on the iPad altogether. This may actually be a physical problem (it has been banged about a bit, and the metal part behind the sound buttons is a weak spot), but was fixed by allowing the physical sound controls to be run in Guided Access mode, and then asking whoever I was talking to to turn up the sound if necessary.
  • Fifth and last, the wi-fi connection drops for about 30 seconds every time I go from one wireless router to the next, which happens all the time in our large office building. I solved this by using the cell connection instead. It still has dead spots in some places in the building (despite our telecom vendor, NetCom, being headquartered very close to us), but I am beginning to learn where they are. It is also solvable by setting up a VLAN, something that requires cooperation from the IT department and which I haven’t gotten around to quite yet.

All in all, I am beginning to find the Double a useful tool. Next time I am invited to speak on TV, I’ll consider sending it down to the studio in a taxi, just to see the reaction. As with many digital solutions, true productivity does not come until everything is digital – for instance, I wanted to use the Double for an early meeting with students last week, but found I couldn’t do it because the door to the department would be locked and there was no way I could unlock it remotely. So I ended up going in for the FTF meeting anyway, even if it was the only thing I needed to be in the office for that day.

A second observation is that the Double elicits all kinds of thoughtful (and less thoughtful) comments from grown-ups, mainly along the lines of how surprisingly natural this is, but how traditional face-to-face is better, alienation, etc. The younger element takes to it naturally – my cousin’s eight-year-old daughter, seeing her Dad in the Double, responded with a “Hi Dad” as the most natural thing in the world.

And thirdly – one obvious use of the Double would be to ship it to wherever I am supposed to be, so I can give a talk remotely. I gave a talk in Bordeaux two days ago. Bordeaux is complicated to get to from Oslo, and the trip ended up taking three days. I could have sent the Double, but a) I think my physical presence helped the talk, and b) the Double has a large lithium-ion battery, and you can’t ship those on airplanes. Consequently, the Double is a tool for making me stay in place while moving about, rather than the other way around.

ACM Ubiquity’s Singularity Symposium

ACM Ubiquity, on whose editorial board I have had the honor of serving as an Associate Editor for a number of years, has a symposium (a collection of essays around a theme) on the Technological Singularity. I have been the editor responsible for this one, and the essays are as follows (I’ll make these live links as they are published):

  1. Opening Statement by Espen Andersen
  2. The Singularity and the State of the Art in Artificial Intelligence by Ernest Davis, NYU
  3. Human Enhancement—The Way Ahead by Kevin Warwick, University of Reading
  4. Exponential Technology and The Singularity by Peter Cochrane
  5. Computers versus Humanity: Do we compete? by Liah Greenfeld and Mark Simes, Boston University
  6. What About an Unintelligent Singularity? by Peter J. Denning, Naval Postgraduate School, editor of ACM Ubiquity
  7. Closing Statement: Reflections on A Singularity Symposium by Espen Andersen

You can read about the background for the symposium in my opening statement – but, in short, I could not get a clear and concise explanation of whether the singularity will happen (and when), so I set about getting a number of smart people to give their perspective. Enjoy!

Being rational in the publishing debate

Book publishing is moving towards subscription models, and tempers are (predictably) flaring, especially since writers cannot change their business model to giving performances rather than selling their works for self-consumption, as musicians can (and do).

John Scalzi, sci-fi writer and blogger par excellence (he works his comment field using what he terms his Mallet of Loving Correction, which is also the title of his blog-generated book), has a reasoned response to the current subscription-or-not discussion, which I encourage everyone to read. Key phrase:

[…] every new distribution model offers opportunities tuned to that particular model of distribution — the question is whether one is smart enough to figure out what the strengths of any distribution model are, and then savvy (and lucky) enough to capitalize on them.

And there you are. Easier ways to publish will lead to more writing – it already does. It will also create new ways of making a living from writing – and, I suspect, new forms of writing (as it already has.) In the process, some will prosper that previously didn’t, others won’t. Digging oneself into a trench certainly won’t help.

Moon landing hoax rebuttal

For some reason, many people with very little brains believe the moon landings of Apollo 11 and the later missions were faked in a giant conspiracy. This video shows why they could not have been faked, and why it matters. (Another point: At no point did the Soviet Union dispute that the USA had made it to the moon – they had their own lunar program and were tracking the missions themselves…)

Anyway, a brilliant piece of argumentation, for your enjoyment:

Elon, I want my data!

Last week I got a parking ticket. I stopped outside BI Norwegian Business School, where I work, to run in and deliver some papers and pick up some computer equipment. There is a spot outside the school where you can stop for 10 minutes for deliveries. When I came out, I had a ticket; the attendant was nowhere in sight – and I am pretty sure I had not been there for 10 minutes. But how to prove that?

Then it dawned on me – I have a Tesla Model S, a very innovative car – not just because it is electric, but because it is constantly connected to the Internet and sold more as a service than a product (actually, sold as a very tight, proprietary-architecture product, much like whatever Apple is selling). Given that there is a great app where I can see where the car is and how fast it is going, I should be able to get the log from Tesla and prove that I parked the car outside BI less than 10 minutes before the ticket was issued…

Well, not so fast. I called Tesla Norway and asked to see the log, and was politely turned down – they cannot give me the data (actually, they will not hand it over unless there is a court order, according to company policy.) A few emails back and forth have revealed that the location and speed data seen by the app is not kept by the internal system. But you can still find out what kind of driving has been done – as Elon Musk himself did when refuting a New York Times journalist’s bad review by showing that the journalist had driven the car harder and in different places than claimed. I could, for instance, use the data to find out precisely when I parked the car, even though I can’t show the location.

And this is where it gets interesting (and where I stop caring about the parking ticket and start caring about principles): Norway has a Personal Data Protection Act, which dictates that if a company is saving data about you, they not only have to tell you what they save, but you also have a “right of inspection” (something I confirmed with a quick call to the Norwegian Data Protection Authority). Furthermore, I am vice chairman of Digitalt Personvern, an association working to repeal the EU data retention directive, and I know some of the best data privacy lawyers in Norway.

So I can probably set in motion a campaign to force Tesla Norway to give me access to my data, based on Norwegian law. Tesla’s policies may be American, but their Norwegian subsidiary has to obey Norwegian laws.

But I think I have a better idea: Why not, simply, ask Tesla to give me the data – not because I have a right to data generated by myself according to Norwegian law, but because it is a good business idea and also the Right Thing to do?

So, Elon Musk: Why not give us Tesla-owners direct access to our logs through the web site? We already have password-protected accounts there, storing documents and service information. I am sure some enterprising developer (come to think of it, I know a few myself, some with Teslas) will come up with some really cool and useful stuff to make use of the information, either as independent apps or via some sort of social media data pooling arrangement. While you are at it, how about an API?

Tesla has already shown that they understand business models and network externalities by doing such smart things as opening up their patent portfolio. The company is demonstrably nerdy – the stereo volume literally goes to 11. Now it is time to open up the data side – to make the car even more useful and personable.

PS: While I have your attention, could you please link the GPS to the pneumatic suspension, so I can set the car to automatically increase road clearance when I exit the highway onto the speed-bumpy road to my house? Being able to take snapshots with the reverse camera would be a nice hack as well, come to think of it. Thanks in advance! (And thanks for the Rdio, incidentally!)

Update a few hours later: Now on Boingboing!

Update Sept. 2: The parking company (Europark) dropped the ticket. They didn’t give a reason, but probably not because I had been parked too long; rather, because I was making a delivery and was allowed to park there.

The disrupted history professor

Professor Jill Lepore, chair of Harvard’s History and Literature program, has published an essay in the New Yorker, sharply critical of Clayton Christensen and his theory of disruptive innovation. The essay has generated quite a stir, including a rather head-shaking analysis by Will Oremus in Slate.

I find Lepore’s essay rather puzzling and, quite frankly, unworthy of a professor of history, Harvard or not. At this point, I should say that I am not an unbiased observer here – Clay is a personal friend of mine. We went through the doctoral program at Harvard Business School together (he started a year before me), he was on my thesis committee (having graduated three years ahead of me), and we have kept in touch, including a few visits by him to Norway and one family vacation with a great trip on Hurtigruten. Clay is commonly known as the “gentle giant” – one of the most considerate, open and thoughtful people I know – and seeing him subjected to vituperative commentary from morons quite frankly pains me.

Professor Lepore’s essay has one very valid point: Like any management idea, disruptive innovation is overapplied, with every technology company or web startup claiming that their offering is disruptive and therefore investment-worthy. As I previously have written: If a product is described as disruptive, it probably isn’t. A disruptive product is something your customers don’t care about, with worse performance than what you have, and with lower profit expectations. Why in the world would you want to describe your offering as disruptive?

That being said, professor Lepore’s essay (I will not call her Jill, because that seems to be a big issue for some people; but since I have met Clay – most recently last week, actually – I will refer to him as Clay) shows some remarkable jumps to non-conclusions. She starts out with a very fine summary of what the theory of disruption says:

Christensen was interested in why companies fail. In his 1997 book, “The Innovator’s Dilemma,” he argued that, very often, it isn’t because their executives made bad decisions but because they made good decisions, the same kind of good decisions that had made those companies successful for decades. (The “innovator’s dilemma” is that “doing the right thing is the wrong thing.”) As Christensen saw it, the problem was the velocity of history, and it wasn’t so much a problem as a missed opportunity, like a plane that takes off without you, except that you didn’t even know there was a plane, and had wandered onto the airfield, which you thought was a meadow, and the plane ran you over during takeoff. Manufacturers of mainframe computers made good decisions about making and selling mainframe computers and devising important refinements to them in their R. & D. departments—“sustaining innovations,” Christensen called them—but, busy pleasing their mainframe customers, one tinker at a time, they missed what an entirely untapped customer wanted, personal computers, the market for which was created by what Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.

She then goes on to say that the theory is mis- and overapplied, and I (and certainly Clay) couldn’t agree more. Everyone and their brother is on an innovation bandwagon, and way too many consulting companies are peddling disruption just like they were peddling business process reengineering back in the nineties (I worked for CSC Index and caught the tail end of that mania). Following this, she points out that Clay’s work is based on cases (it is), is theory-building rather than theory-confirming (yep), and that you can find plenty of cases of things that were meant to be disruptive but weren’t, or companies that were disruptive but still didn’t succeed. All very well, though much of this, I should say, is addressed in Clay’s later books and various publications, including a special issue of the Journal of Product Innovation Management.

(Curiously, she mentions that she has worked as an assistant to Michael Porter‘s assistant, apparently having a good time and seeing him as a real professor. She then goes on to criticise the theory of disruptive innovation as having no predictive power – but the framework that Porter is most famous for, the five forces, has no predictive power either: It is a very good way to describe the competitive situation in an industry but offers zero guidance as to what you actually should do if you are, say, in the airline industry, which scores very badly on all five dimensions. There is a current controversy between Clay and Michael Porter on where the Harvard Business School (and, by implication, business education in general) should go. The controversy is, according to Clay, mostly “ginned up” in order to make the Times article interesting, but I do wonder what professor Lepore’s stakes are here.)

The trouble with management ideas is that while they can be easily dismissed when commoditized and overapplied, most of them actually start out as very good ideas within their bounds. Lepore feels threatened by innovation, especially the disruptive kind, because it shows up in both her journalistic career (she is a staff writer with the New Yorker) and her academic one. I happen to think that the framework fits rather well in the newspaper industry, but then again, I have spent a lot of time with Schibsted, the only media company in the world that has managed to make it through the digital transition with top- and bottom-line growth, largely by applying Clay’s ideas. But for Lepore, innovation is a problem because it is a) unopposed by intellectuals, b) happening too fast, without giving said intellectuals time to think, and c) done by the wrong kind of people (that is, youngsters slouching on sofas, doing little work since most of their attention is spent on their insanely complicated coffee machines, which “look like dollhouse-size factories”). I am reminded of “In the Beginning… Was the Command Line”, Neal Stephenson‘s beautiful essay about technology and culture, where he points out that in

… the heyday of Communism and Socialism, [the] bourgeoisie were hated from both ends: by the proles, because they had all the money, and by the intelligentsia, because of their tendency to spend it on lawn ornaments.

And then Lepore turns bizarre, saying that disruptive innovation does not apply in journalism (and, by extension, academia) because “that doesn’t make them industries, which turn things into commodities and sell them for gain.” Apparently, newspapers and academia should be exempt from economic laws because, well, because they should. (I have had quite a few discussions with Norwegian publishing executives, who seem to think so for their industry, too.)

I think newspapers and academic institutions are industries – knowledge industries, operating in a knowledge economy, where things are very much turned into commodities these days, by rapidly advancing technology for generating, storing, finding and communicating information. The increased productivity of knowledge generation will mean that we will need fewer, but better, knowledge institutions. Some of the old ones will survive, even prosper. Some will be disrupted. Treating disruptive innovation as a myth certainly is one option, but I wish professor Lepore would base that decision on something more than what appears to be rhetorical comments, a not very careful reading of the real literature, and, quite frankly, wishful thinking.

But I guess time – if not the Times – will show us what happens in the future. As for disruption, I would rather be the disruptor than the disruptee. I would have less money and honor, but more fun. And I would get to write the epitaph.

But then again, I have an insanely complicated coffee machine. And now it is time to go and clean it.