Category Archives: Technology strategy

Getting dialogue online

Back in the nineties, I facilitated a meeting with Frank Elter in a Telenor video meeting room in Oslo. There were about eight participants and an invited presenter: Tom Malone from MIT.

The way it was set up, we first saw a one-hour video Tom had created, in which he gave a talk and showed some clips about new ways of organizing work (one of the more memorable sequences was a shortened version of the four-hour house video). After seeing Tom’s video, we spent about an hour discussing some of the questions he had raised. Then Tom came on from a video conferencing studio in Cambridge, Massachusetts, to discuss with the participants.

The interesting thing, to me, was that the participants experienced this meeting as “three hours with Tom Malone”. Tom experienced it as a one hour discussion with very interested and extremely well prepared participants.

A win-win, in other words.

I was trying for something similar yesterday, guest lecturing in Lene Pettersen’s course at the University of Oslo, using Zoom with early entry, chat, polling, and all video/audio enabled for all participants. This was the first videoconference lecture for the students and for three of my colleagues, who joined in. In preparation, the students had read some book chapters and articles and watched my video on technology evolution and disruptive innovations.

For the two hour session, I had set up this driving plan (starting at 2 pm, or 14:00 as we say over here in Europe…):


Leading the discussion. Zoom allows you to show a virtual background, so I chose a picture of the office I would have liked to have…

14:00 – 14:15 Checking in, fiddling with the equipment, and making sure everything worked. (First time for many of the users, so have them show up early so technical issues don’t eat into the teaching time.)
14:15 – 14:25 Lene introduces the class, talks about the rest of the course and turns over to Espen (we also encouraged the students to enter questions they wanted addressed in the chat during this piece)
14:25 – 14:35 Espen talking about disruption and technology-driven strategies.
14:35 – 14:55 Students into breakout rooms, discussing what it would take for video and digital delivery to be a disruptive innovation for universities. (Breaking students up into 8 rooms of four participants, asking them to nominate a spokesperson to take notes and paste them into the chat when they return, and to discuss the specific question: What needs to happen for COVID-19 to cause a disruption of universities, and how would such a disruption play out?)
14:55 – 15:15 Return to main room; Espen sums up a little bit and calls on spokespersons from three of the eight groups, based on the notes posted in the chat (which everyone can see). Espen talks about the Finn.no case and raises the next discussion question.
15:15 – 15:35 Breakout rooms, students discuss the next question: What needs to happen for DNB (Norway’s largest bank) to become a data-driven, experiment-oriented organization like Finn.no? What are the most important obstacles and how should they be dealt with?
15:35 – 15:55 Espen sums up the discussion, calling on some students based on the posts in the chat.
15:55 – 16:00 Espen hands back to Lene, who sums up. After 16:00, we stayed on with colleagues and some of the students to discuss the experience.


The dashboard as I saw it. Student names obscured.

Some reflections (some of these are rather technical, but they are notes to myself):

  • Not using PowerPoint or a shared screen is important. Running Zoom in gallery view (I had set it up so you could see up to 49 participants at the same time) and having the students log in to Zoom and upload a picture gave a feeling of community. Screen and/or presentation sharing breaks the flow for everyone: when you do it in Zoom, the screen reconfigures (as it does when you come back from a breakout room) and you have to reestablish the participant panel and the chat floater. Instead, using polls and discussion questions, with results communicated through the chat, was easier for everyone (and way less complicated).

    Satisfactory results, I would say.

    I used polls on three occasions: before each discussion breakout, and at the end to ask the students what the experience was like. They were very happy about it and had good pointers on how to make it better.

  • We had no performance issues and rock-steady connection the whole way through.
  • It should be noted that the program is one of the most selective in Norway, and the students are highly motivated and very good. During the breakout sessions I jumped into each room to listen in on the discussion (I learned that it was best to pause recording, to avoid a voice saying “This session is being recorded” as I entered). The students were actively discussing in every group, with my colleagues (Bendik, Lene, and Katja) also participating. I had kept the groups to four participants, based on feedback from a session last week, where the groups had been six or seven students and had issues with people speaking over each other.
  • Having a carefully written driving plan was important, but it was still a very intense experience; I was quite exhausted afterwards. My advice on not teaching alone stands – in this case, I was the only one with experience, but that will change very fast. Still, I kept feeling rushed and would have liked more time, especially in the summary sections, where I would have liked to bring more students in to talk.
  • I did have a few breaks myself – during the breakout sessions – to go to the bathroom and replenish my coffee, but I failed to allow for breaks for the students. I assume they managed to sneak out when necessary (hiding behind a still picture), but next time I will explicitly schedule breaks, perhaps a five-minute break in the transition from the main room to the breakout rooms.

Conclusion: This can work very well, but I think it is important to set up each video session based on what you want to use it for: to present something, to run an exercise, or to facilitate interaction. With a small student group like this, I think interaction worked very well, but it requires a lot of preparation. You have to be extremely conscious of time – I seriously think any two-hour classroom session needs to be rescheduled as a three-hour session, simply because the interaction is slower and you need to have breaks.

As Winston Churchill almost said (he said a lot, didn’t he): We make our tools, and then our tools make us. We now have the tools; it will be interesting to see how the second part of this transition plays out.

A teaching video – with some reflections

Last Thursday, I was supposed to teach a class on technology strategy for a bachelor program at the University of Oslo. That class has been delayed for a week and (obviously) moved online. I thought about doing it as a video conference, but why not make a video and ask the students to watch it before class? Then I could run the class interactively, discussing the readings and the video rather than spending my time talking into a screen. Recording a video is more work, but the result is reusable in other contexts, which is why I did it in English, not Norwegian. The result is here:

To my teaching colleagues: The stuff in the middle is probably not interesting – see the first two and the last five minutes for pointers to teaching and video editing.

For the rest, here is a short table of contents (with approximate time stamps):

  • 0:00 – 2:00 Intro, some details about recording the video etc.
  • 2:00 – 27:30 Why technology evolution is important, and an overview of technology innovation/evolution processes
    • 6:00 – 9:45 Standard engineering
    • 9:45 – 12:50 Invention
    • 12:50 – 15:50 Structural deepening
    • 15:50 – 17:00  Emerging (general) technology
      • 17:00 – 19:45 Substitution
      • 19:45 – 25:00 Expansion, including dominant design
      • 25:00 – 27:30 Structuration
  • 27:30 – 31:30 Architectural innovation (technology phases)
  • 31:30 –  31:45 BREAK! (Stop the video and get some coffee…)
  • 31:45 – 49:40 Disruption
    • 31:45 – 38:05 Introduction and theory
    • 38:05 – 44:00 Excavator example
    • 44:00 – 46:00 Hairdresser example
    • 47:00 – 47:35 Characteristics of disruptive innovations
    • 47:35 – 49:40 Defensive strategies
  • 49:40 – 53:00 Things take time – production and teaching…
  • 53:00 – 54:30 Fun stuff

This is not the first time I have recorded videos, by any means, but it is the first time I have created one for “serious” use, where I try to edit it to be reasonably professional. Some reflections on the process:

  • This is a talk I have given many times, so I did not need to prepare the content much – mainly select some slides. For a normal course, I would use two to three hours to go through the first 30 minutes of this video – I use much deeper examples and interact with the students, have them come up with other examples, and so on. The disruption part typically takes one to two hours, plus at least one hour on a specific case (such as steel production). Now the format forces me into straight presentation, as well as a lot of simplification – perhaps too much. I aim to focus on some specifics in the discussion with the students.
  • I find that I say lots of things wrong, skip some important points, forget to put emphasis on other points. That is irritating, but this is straight recording, not a documentary, where I would storyboard things, film everything in short snippets, use videos more, and think about every second. I wanted to do this quickly, and then I just have to learn not to be irritated at small details.
  • That being said, this is a major time sink. The video is about 55 minutes long. Recording took about two hours (including a lot of fiddling with equipment and a couple of breaks). Editing the first 30 minutes of the video took two hours, and another hour and a half for the disruption part (mainly because by then I was tired and said a number of unnecessary things that I had to remove).
  • Using the iPad to be able to draw turned out not to be very helpful in this case; it complicated things quite a bit. Apple’s Sidecar is still a bit unpredictable, and for changing the slides or the little drawing I did on them, a mouse would have been enough.
  • Having my daughter as audience helps, until I have trained myself to look constantly into the camera. Taping a picture of her or another family member to the camera would probably work almost as well, with practice. (She has heard all my stories before…)
  • When recording with a smartphone, put it in flight mode so you don’t get phone calls while recording (as I did.) Incidentally, there are apps out there that allow you to use the iPhone as a camera connected to the PC with a cable, but I have not tested them. It is easy to transfer the video with AirPlay, anyway.
  • The sound is recorded with two microphones (the iPhone and a Røde wireless mic). I found that it got “fatter” if I used both tracks, so I did that, but it does sometimes screw up the preview function in Camtasia (though not the finished product). That setup would also have captured both my voice and my daughter’s (though she did not ask any questions during the recording, except in the outtakes).
  • One great aspect of recording a video is that you can fix errors – just pause, repeat whatever you were going to say, and then cut it in editing. I also used video overlays to correct errors in some slides, and annotations to correct things I said wrong (such as repeatedly saying “functional deepening” instead of “structural deepening”). It does take time, however…

My excellent colleague Ragnvald Sannes pointed out that this is indicative of how teaching will work in the future, from a work (and remuneration) perspective. We will spend much more time making the content, and less time giving it. This, at the very least, means that teachers can no longer be paid based on the number of hours spent teaching – or that we need to redefine what teaching means…

Practical business development

I have come to learn that there are no boring industries – one always finds something interesting in what at first may look fairly mundane. And that is something I am trying to teach my students as well.

Andrew Camarata is a young man who works for himself with excavators, bulldozers, gravel, stone, earthworks and so on. He lives and runs his business in the Hudson Valley just south of Albany, New York and in the winter he does, among a lot of other things, snow plowing.

In this video, he will tell you almost everything there is to know about how to plow snow commercially in rural United States and make money from it.

The interesting point about this video (and a lot of other videos he has made – he has a great following on YouTube) is that he provides a very thorough understanding of business design: In the video, he talks about acquiring and maintaining resources, understanding customers (some are easy, others difficult, and you need to deal with both), administration and budgeting, ethics (when to plow, when not to), and risk reduction (putting the most complicated jobs, with the greatest risk of destroying equipment, last in the job queue, to reduce the consequences of breakdowns).

For a business student, this is not a bad introduction to business, and Camarata is certainly a competent businessman. In fact, I see nothing here that is not applicable in any industry.

When it also comes in a pedagogically and visually excellent package, what’s not to like?

The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – the Hammurabi code for buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer; in others licensing is required, perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined “digital” around 1942, Tukey coined “software” in 1952. The 1968-69 conference on software engineering established the term, and a CACM letter by Anthony Oettinger used it in 1966, but it was in use before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra, Knuth, sees them as practical art, art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with forces that are apparent when designing and building software systems. Good engineering based on discovery, invention, and implementation – and this has been the pattern of software engineering – dance between science and implementation.

Lovelace first programmer, algorithmic development. Boole and boolean algebra, implementing raw logic as “laws of thought”.

The first computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanization with punch cards (Gertrude Blanch on human computing, 1938; J. Presper Eckert on punch card methods, 1940) – the first methodology with pattern languages.

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high-order languages and the first general-purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn on real-time (1948), Wheeler and Gill on subroutines (1949), Eckert and Mauchly with software as a thing in itself (1949). John Backus with imperative programming (Fortran), Goldstine and von Neumann with flowcharts (1947). Commercial computers – LEO, for a tea company in England, with John Pinkerton creating the operating system; Hopper with ALGOL and COBOL; reuse (Bemer, Sammet). The SAGE system important, command and control – Jay Forrester and Whirlwind (1951), Bob Evans (SAGE, 1957), Strachey on time sharing (1959), St Johnson with the first programming services company (1959).

Software crisis – not enough programmers around, machines more expensive than the humans, a priesthood of programming: carry programs over, get results, batch. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problem was complexity and productivity, hence software engineering (Margaret Hamilton) arguing that the process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. First SW engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – second generation of SW engineering: UML (Booch 1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and Internet changed things again: Sutherland and Scrum, Beck on eXtreme Programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (the first major contract between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organizational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: Ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time, trusted, and there we are. Computer science focused on physics and algorithms, software engineering on process, architecture, economics, organisation, HCI. SWEBOK first 2004, latest 2014, codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most SW development is like engineering a city: you have to change things in the presence of things that you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as DeepMind. There is still a lot we don’t know how to do – such as architecture for AI, and there is little rigorous specification and testing. Orchestration of AI will change how we look at systems: teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.

Sources: https://twitter.com/Grady_Booch and https://computingthehumanexperience.com/

Interesting interview with Rodney Brooks

Boingboing, which is a fantastic source of interesting stuff to do during Easter vacation, has a long and fascinating interview by Rob Reid with Rodney Brooks, AI and robotics researcher and entrepreneur extraordinaire. Among the things I learned:

  • What the Baxter robot really does well – interacting with humans and not requiring 1/10 mm precision, especially when learning
  • There are not enough workers in manufacturing (even in China); most of the ones working there spend their time waiting for some expensive capital equipment to finish
  • The automation infrastructure is really old, still using PLCs that refresh and develop really slowly
  • Robots will be important in health care – preserving people’s dignity by allowing them to drive and stay at home longer, with robots that understand force and softness and can do things such as help people out of bed.
  • He has written an excellent 2018 list of dated predictions on the evolution of robotic and AI technologies, highly readable, especially his discussions on how to predict technologies and that we tend to forget the starting points. (And I will add his blog to my Newsblur list.)
  • He certainly doesn’t think much of the trolley problem, but has a great example to understand the issue of what AI can do, based on what Isaac Newton would think if he were transported to our time and given a smartphone – he would assume that it would be able to light a candle, for instance.

Worth a listen.

Concorde moment

I recently searched for the term “Concorde moment” and did not find it. The term appeared on Top Gear some years ago (though I can’t find the clip), probably mentioned by James May (who knows something about technology evolution) or Jeremy Clarkson (who certainly lamented the passing of the Concorde many times). What “Concorde moment” means, essentially, is (as Clarkson says in the video below) “a giant step backward for mankind”.

The Concorde is still the fastest passenger jet ever made (3.5 hours from London to New York) and still the most beautiful one. In the end, it turned out to be too noisy, too polluting, and too expensive, never really making money. But it sure looked impressive. I never got to go on one, despite working in an international consulting company and jetting back and forth across the pond quite a bit. But my boss once bamboozled someone into bleeding for the ticket, and lived off the experience for a long time.

A Concorde moment, in other words, is a situation where a groundbreaking technology ceases to be, despite clearly being (and remaining) best in class, for reasons that seem hard to understand. Other examples may include

  • the Palm Pilot with its Graffiti shorthand system, once used by businesspeople all over the world (and by my wife to take impressive notes in all her studies)
  • the Apollo space program – we last went to the moon in 1972, with Apollo 17, and have not been back since. 45 years without going back has resulted in some impressive conspiracy theories, but again, the lack of any scientific or economic reason for going there is probably why it hasn’t happened.
  • the Bugatti Veyron, at least according to Top Gear. Personally, I find the announced new Tesla Roadster much more exciting, but, well, everyone is entitled to an opinion.
  • and, well, suggestions?

A tour de Fry of technology evolution

There are many things to say about Stephen Fry, but it is enough to show this video, filmed at Nokia Bell Labs, explaining, amongst other things, the origin of microchips, the power of exponential growth, and the adventure and consequences of performance and functionality evolution. I am beginning to think that “the apogee, the acme, the summit of human intelligence” might actually be Stephen himself:

(Of course, the most impressive feat is his easy banter on hard questions after the talk itself. Quotes like: “[and] who is to program any kind of moral [into computers ]… If [the computer] dives into the data lake and learns to swim, which is essentially what machine learning is, it’s just diving in and learning to swim, it may pick up some very unpleasant sewage.”)

Notes from ACM Webinar on blockchain (etc.)

The Next Radical Internet Transformation: How Blockchain Technology is Transforming Business, Governments, Computing, and Security Models

Speaker: Mark Mueller-Eberstein, CEO & Founder at Adgetec Corporation, Professor at Rutgers University, Senior Research Fellow at QIIR

Moderator: Toufi Saliba, CEO, PrivacyShell and Chair of the ACM PB Conference Committee

Warning: These are notes taken live. Errors and omissions will occur. No responsibility whatsoever.

  • intro: old enough to remember the discussions in the early 90s about how the internet would change mail services – completely forgetting shopping, entertainment and others
  • Blockchain solves the problem of transferring value between Internet users without a third party
  • goes beyond the financial industry, can handle any kind of transaction
  • most of the world has access to a mobile phone, but only about 20% have access to the banking system
  • Blockchain is the banking industry’s Uber moment
  • Blockchain much wider than Bitcoin, will facilitate new business models.
  • Blockchain transfers rather than copies digital assets, making sure there is only one instance of it.
    • settlement process: no clearing houses or central exchanges
    • peer-to-peer transfers, validation by network
  • Example: WeChat taking over payments in China, no link to banks
  • many commercial or government services are basically “databases” that are centrally managed, with one central point of failure
  • Blockchain allows a distributed ledger, information put in cannot be changed
    • Estonia thinking about a Blockchain in case of hacking or occupation
  • public (open), private, and government blockchains
  • allows new services to existing customers, lots of inefficiencies up for grabs
    • estate records, voting, domain control, escrow, etc…
    • iPayYou allows use of Bitcoin
    • Walt Disney looking at Blockchain (DragonChain) for internal transfers, also use it for tracking supply chain to their cruise ships. Opensourced it.
  • 80% of Bitcoin mining done in China
  • regulation comes with a cost
  • Shenzhen want to be Blockchain Tech capital
  • 6-level security model, developed by William Mougayar (goes through it in detail: transaction, account, programming, distributed organizations, network (51% attacks, perhaps as low as 30%, smaller blockchains more vulnerable), governance)
  • Ethereum blockchain focusing on smart contracts: hard forked in 2016 over the DAO issue, where somebody hacked the DAO code to siphon off money – hacking the program using the blockchain, not the blockchain itself
  • a credit card transaction can take up to 30 days, with disputes and everything; a blockchain transaction is almost instant
  • How “real” is blockchain technology?
    • Goldman-Sachs invested $500m+
    • 15% of top global banks intend to roll out full-scale, commercial blockchain
    • etc.
  • what is holding it back?
    • difficult to use, understand, buy in; perception of risk and legality
    • difficult to see value for the individual
  • questions:
    • what are the incentives and adoption models?
      • different philosophies: computing power must be made available in the network: industrial mining vs. BitTorrent model, the amount of computing provided will be important, if we can find a model where just a little bit from every mobile phone is required
    • what are the hard costs of Blockchain?
      • you can google the costs. There are other approaches being developed, will post some links
    • can blockchain be compromised by a virus?
      • theoretically, yes. Bitcoin has gone 10 years without one; open source means verification (change is happening slowly because of code inspection)
      • comes back to incentive and governance model
  • and that was that… the recording will be at webinar.acm.org in a few days.
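An aside to these notes: the “information put in cannot be changed” property comes from hash chaining, which a few lines of Python can illustrate. This is a toy sketch of my own (the function names are mine); a real blockchain adds consensus, mining, and peer-to-peer validation on top of this idea.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's canonical JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transaction):
    # Each new block stores the hash of the previous block,
    # chaining the whole history together.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": transaction, "prev": prev})

def is_valid(chain):
    # The ledger is valid only if every link still matches.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
assert is_valid(chain)

# Tampering with an early transaction breaks every later link.
chain[0]["tx"]["amount"] = 500
assert not is_valid(chain)
```

Changing one old entry invalidates every block after it, which is why rewriting history requires redoing (and, in a real network, winning consensus for) all subsequent blocks.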

SmartHelp – geolocation for crisis situations

I am on the board of SmartHelp – a platform for crisis communication for emergency services (or, indeed, for any company that needs to locate its assets or employees in a hurry). The platform has been running in production in two emergency services (fire and ambulance) in Trondheim, Norway, since December 2014. It allows the public to contact the emergency service via a smartphone interface, automatically give precise details about where they are, and also chat and share their medical information (fully encrypted to a medical standard).

Here is a video demonstrating how the system works:

We are currently seeking partners for marketing and further developing this platform outside the Norwegian emergency service market. Please contact me (self@espen.com, +47 4641 0452) or Fredrik Øvergård, CEO (fredrik@radvice.no, +47 977 32 708)  for further information.

Does someone have to die first?

Blogpost for ACM Ubiquity, intro here:

Digital technology changes fast, and organizations change slowly: first using the technology as an automated, digitized version of the old way of doing things, then gradually understanding that, in order to achieve productivity and functional breakthroughs, we need to leave the old metaphors behind. For this to happen, we need new mindsets, unfettered by the old way of using the technology. I wonder if my generation has the capability to do it.

Read the rest at ACM Ubiquity: Does someone have to die first?

Accenture and connected health

(Notes from an Executive Short Program called Digitalization for Growth and Innovation, hosted by Ragnvald Sannes and yours truly, in Sophia Antipolis right now. Disclaimer: These are my notes, I am writing fast and might get something wrong, so nothing official from Accenture or anyone else.)

Andy Greenberg is relatively new to Accenture, having a background in various technology companies involved in health and fitness monitoring.

The Internet of Things is the next era in computing; we are moving to the second half of the chessboard, and Moore’s law is still active. Everything gets faster all the time, sensors get cheaper, and more and more connections and kinds of connections become available. A lot of the data growth has been driven by sensors. Smartphones are everywhere, but can’t be assumed in the health space.
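The “second half of the chessboard” image comes from the old rice-on-the-chessboard legend: one grain on the first square, doubling on each subsequent square. A few lines of Python (my own illustration, not from the talk) show why the second half is different in kind:

```python
# One grain on the first square, doubling on every subsequent square.
first_half = sum(2**i for i in range(32))       # squares 1-32
second_half = sum(2**i for i in range(32, 64))  # squares 33-64

print(first_half)                 # 4294967295, about 4.3 billion grains
print(second_half // first_half)  # 4294967296: the second half dwarfs the first
```

Each further doubling now adds more than everything that came before, which is why exponential growth that looked manageable in the first half of the chessboard becomes overwhelming in the second.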

We need to capture the data, and we can’t send it all away – so we have to do data analytics on the edge, i.e. do the analysis right away. You have to think about some things: engineers designing for engineers is not a good thing, and just because you can do something – such as connecting a device – does not necessarily mean you should do it. However, there will be 25 to 50 billion connected devices in the next few years, and they can deliver value. Tesla, for instance, can update its cars instead of recalling them, improving customer satisfaction and saving money. An Airbus can send messages about needed parts; in the future they will be 3D printed at the airport before the plane lands. There is a large gap between how many CEOs think IoT is important and how many have any kind of capability to do it.

IoT has enormous potential in health care. We have an aging population – and that is true of the health providers as well. Patients have different expectations: “health consumers are becoming consumers, comparing their experience not to the last doctor’s visit, but the last time they bought something on Amazon”. Spending on healthcare is increasing, as is the number of connected and connectable devices.

IoT enables connected health services, including merging the experience at home and at the hospital – feeding data from home to hospital, and feeding treatment from hospital to home after a hospital stay. The key is to understand the complexity of where people are at different times and manage accordingly, as opposed to thinking that they are either one kind of patient or another – we are all different types of patients at one time or another. The focus tends to be on preventing readmission to hospital, but there might be more value in managing the healthy population – focusing only on the high-risk patients may not be the right strategy (Dee Edington – Zero Trends). It is not just about getting the ill well, but keeping the well well.

Moore’s law works both for fitness devices and medical devices. For fitness devices, wireless offloading of data makes a real difference – the holy grail is when the data offload disappears completely: if something monitors you all the time and alerts you when to do something, you are more likely to use it. Medical devices have been more about diagnosis, but are now moving into monitoring and adherence. Proteus Digital Health, for instance, has a smart pill that confirms it is being taken. The problem is that you need to wear a patch, and the first drug it is being applied to is one for schizophrenia – in other words, for the patients most likely to be paranoid… There is also work being done on smart devices such as asthma inhalers, which can track how much they are used, geolocate, match to other people using inhalers the same day, track pollen counts, etc. – finding covariation in individual and communal data.
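Finding covariation between individual and communal data, as with the smart inhalers, boils down to simple statistics. A hedged sketch with entirely invented numbers – `pearson()` is a hand-rolled helper, not a library call, and the data are hypothetical:

```python
# Illustrative only: correlating individual device data (inhaler puffs)
# with communal data (local pollen counts). All numbers are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pollen = [10, 40, 80, 120, 60]   # daily pollen count
puffs = [1, 2, 4, 6, 3]          # inhaler uses on the same days
r = pearson(pollen, puffs)       # close to 1.0: strong covariation
```

A correlation this strong across many users is the kind of signal that lets a connected inhaler warn its owner on high-pollen days.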

Healthcare players need to understand consumer expectations – Disney spent more than a billion dollars on wristbands to make the interaction with their parks much more frictionless. Healthcare providers should do the same thing – help their patients navigate through their services, including hospitals, to make the experience more seamless. This is happening in the pharmaceutical space: About half of all prescriptions are either not filled or taken incorrectly. Some names: Gecko Health, Propeller Health, Adherium, Inspiro Medical.

When you add connectivity to the mix, it changes everything. One challenge is that even though the value is clear, the person receiving it may not be the one paying for it. Another is that many device makers see connectivity as a way to be unique; that will change over time, because the value is much bigger when things are standardized and widely distributed. You also need to standardize to the lowest common denominator from a connectivity perspective. Security is obviously an important issue as well.

Q: How do you make a secure app, how do you handle security?

Andy: Only a minority has a passcode on their phone, so you need a separate login. Security has to be part of the design from the very beginning – the biggest piece of guidance is to understand that.

Q: Can you see health care becoming completely digitalized?

Andy: Health care will always have a large human element, and there are huge hurdles in interoperability, but within 5 to 10 years we will see significant action. The technology is not the problem any more – it is all about adoption.

Francis: We are stuck in a fee for service model that is, in my opinion, broken. Should move to a value model, and digitization can help with that.

Q: Where will we see the first real use of it?

Andy: We are already seeing that, in pockets. Maybe the most interesting recent application is the use of telemedicine for mental health: the VA hospitals are using it to allow face-to-face conversations with clients with mental health issues. The key here is having payers accept this as legitimate treatment. Remote monitoring is coming along. Changes in payment models, and health plans that adjust prices if you carry a device, also drive this.

Q: The Nordics are a bit of digital laggards – what will happen here?

Andy: The Nordics tend to be ahead in technology and behind in business models. The aging population is a driver, and Asia is a big area for that. Regulatory constraints are going to be a big hurdle – some countries are so high on privacy that they make it almost impossible to even try. Payment is important: if governments say they are willing to pay for making the elderly stay at home longer, then it will come.

Elon, I want my data!

Last week I got a parking ticket. I stopped outside BI Norwegian Business School where I work, to run in and deliver some papers and pick up some computer equipment. There is a spot outside the school where you can stop for 10 minutes for deliveries. When I came out, I had a ticket, the attendant was nowhere in sight – and I am pretty sure I had not been there for 10 minutes. But how to prove that?

Then it dawned on me – I have a Tesla Model S, a very innovative car – not just because it is electric, but because it is constantly connected to the Internet and sold more as a service than a product (actually, sold as a very tight, proprietary-architecture product, much like whatever Apple is selling). Given that there is a great app where I can see where the car is and how fast it is going, I should be able to get the log from Tesla and prove that I parked the car outside BI less than 10 minutes before the ticket was issued…

Well, not so fast. I called Tesla Norway and asked to see the log, and was politely turned down – they cannot give me the data (actually, they will not hand it over unless there is a court order, according to company policy.) A few emails back and forth have revealed that the location and speed data seen by the app is not kept by the internal system. But you can still find out what kind of driving has been done – as Elon Musk himself did when refuting a New York Times journalist’s bad review by showing that the journalist had driven the car harder and in different places than claimed. I could, for instance, use the data to find out precisely when I parked the car, even though I can’t show the location.

And this is where it gets interesting (and where I stop caring about the parking ticket and start caring about principles): Norway has a Personal Data Protection Act, which dictates that if a company is saving data about you, they not only have to tell you what they save, but you also have a “right of inspection” (something I confirmed with a quick call to the Norwegian Data Protection Authority). Furthermore, I am vice chairman of Digitalt Personvern,  an association working to repeal the EU data retention directive and know some of the best data privacy lawyers in Norway.

So I can probably set in motion a campaign to force Tesla Norway to give me access to my data, based on Norwegian law. Tesla’s policies may be American, but their Norwegian subsidiary has to obey Norwegian laws.

But I think I have a better idea: Why not, simply, ask Tesla to give me the data – not because I have a right to data generated by myself according to Norwegian law, but because it is a good business idea and also the Right Thing to do?

So, Elon Musk: Why not give us Tesla-owners direct access to our logs through the web site? We already have password-protected accounts there, storing documents and service information. I am sure some enterprising developer (come to think of it, I know a few myself, some with Teslas) will come up with some really cool and useful stuff to make use of the information, either as independent apps or via some sort of social media data pooling arrangement. While you are at it, how about an API?

Tesla has already shown that they understand business models and network externalities by doing such smart things as opening up their patent portfolio. The company is demonstrably nerdy – the stereo volume literally goes to 11. Now it is time to open up the data side – to make the car even more useful and personable.

PS: While I have your attention, could you please link the GPS to the pneumatic suspension, so I can set the car to automatically increase road clearance when I exit the highway onto the speed-bumpy road to my house? Being able to take snapshots with the reverse camera would be a nice hack as well, come to think of it. Thanks in advance! (And thanks for the Rdio, incidentally!)

Update a few hours later: Now on Boingboing!

Update Sept. 2: The parking company (Europark) dropped the ticket – didn’t give a reason, but probably not because I was parked too long but because I was making a delivery and could park there.

The disrupted history professor

Professor Jill Lepore, chair of Harvard’s History and Literature program, has published an essay in the New Yorker, sharply critical of Clayton Christensen and his theory of disruptive innovations. The essay has generated quite a stir, including a rather head-shaking analysis by Will Oremus in Slate.

I find Lepore’s essay rather puzzling and, quite frankly, unworthy of a professor of history, Harvard or not. At this point, I should say that I am not an unbiased observer here – Clay is a personal friend of mine: we went through the doctoral program at Harvard Business School together (he started a year before me and graduated three years ahead of me), he was on my thesis committee, and we have kept in touch, including him coming to Norway for a few visits and one family vacation that included a great trip on Hurtigruten. Clay is commonly known as the “gentle giant” and is one of the most considerate, open and thoughtful people I know, and seeing him subjected to vituperative commentary from morons quite frankly pains me.

Professor Lepore’s essay has one very valid point: Like any management idea, disruptive innovation is overapplied, with every technology company or web startup claiming that their offering is disruptive and therefore investment-worthy. As I previously have written: If a product is described as disruptive, it probably isn’t. A disruptive product is something your customers don’t care about, with worse performance than what you have, and with lower profit expectations. Why in the world would you want to describe your offering as disruptive?

That being said, professor Lepore’s essay shows some remarkable jumps to non-conclusions. (I will not call her Jill, because that seems to be a big issue for some people; but since I have met Clay – most recently last week, actually – I will refer to him as Clay.) She starts out with a very fine summary of what the theory of disruption says:

Christensen was interested in why companies fail. In his 1997 book, “The Innovator’s Dilemma,” he argued that, very often, it isn’t because their executives made bad decisions but because they made good decisions, the same kind of good decisions that had made those companies successful for decades. (The “innovator’s dilemma” is that “doing the right thing is the wrong thing.”) As Christensen saw it, the problem was the velocity of history, and it wasn’t so much a problem as a missed opportunity, like a plane that takes off without you, except that you didn’t even know there was a plane, and had wandered onto the airfield, which you thought was a meadow, and the plane ran you over during takeoff. Manufacturers of mainframe computers made good decisions about making and selling mainframe computers and devising important refinements to them in their R. & D. departments—“sustaining innovations,” Christensen called them—but, busy pleasing their mainframe customers, one tinker at a time, they missed what an entirely untapped customer wanted, personal computers, the market for which was created by what Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.

She then goes on to say that the theory is mis- and overapplied, and I (and certainly Clay) couldn’t agree more. Everyone and their brother is on an innovation bandwagon, and way too many consulting companies are peddling disruption just like they were peddling business process reengineering back in the nineties (I worked for CSC Index and caught the tail end of that mania). Following this, she points out that Clay’s work is based on cases (it is), is theory-building rather than theory-confirming (yep), and that you can find plenty of cases of things that were meant to be disruptive but weren’t, or companies that were disruptive but still didn’t succeed. All very well – though, I should say, much of this is addressed in Clay’s later books and various publications, including a special issue of the Journal of Product Innovation Management.

(Curiously, she mentions that she has worked as an assistant to Michael Porter‘s assistant, apparently having a good time and seeing him as a real professor. She then goes on to criticise the theory of disruptive innovation as having no predictive power – but the framework Porter is most famous for, the five forces, has no predictive power either: It is a very good way to describe the competitive situation in an industry, but offers zero guidance as to what you actually should do if you are, say, in the airline industry, which scores very badly on all five dimensions. There is a current controversy between Clay and Michael Porter on where the Harvard Business School (and, by implication, business education in general) should go. The controversy is, according to Clay, mostly “ginned up” in order to make the Times article interesting, but I do wonder what professor Lepore’s stakes are here.)

The trouble with management ideas is that while they can be easily dismissed when commoditized and overapplied, most of them actually start out as very good ideas within their bounds. Lepore feels threatened by innovation, especially the disruptive kind, because it shows up both in her journalistic (she is a staff writer with the New Yorker) and academic career. I happen to think that the framework fits rather well in the newspaper industry, but then again, I have spent a lot of time with Schibsted, the only media company in the world that has managed to make it through the digital transition with top- and bottom-line growth, largely by applying Clay’s ideas. But for Lepore, innovation is a problem because it is a) unopposed by intellectuals, b) happening too fast, without giving said intellectuals time to think, and c) done by the wrong kind of people (that is, youngsters slouching on sofas, doing little work since most of their attention is spent on their insanely complicated coffee machines, which “look like dollhouse-size factories”.) I am reminded of “In the beginning…was the command line.”, Neal Stephenson‘s beautiful essay about technology and culture, where he points out that in

… the heyday of Communism and Socialism, [the] bourgeoisie were hated from both ends: by the proles, because they had all the money, and by the intelligentsia, because of their tendency to spend it on lawn ornaments.

And then Lepore turns bizarre, saying that disruptive innovation does not apply to journalism (and, by extension, academia) because “that doesn’t make them industries, which turn things into commodities and sell them for gain.” Apparently, newspapers and academia should be exempt from economic laws because, well, because they should. (I have had quite a few discussions with Norwegian publishing executives, who seem to think so for their industry, too.)

I think newspapers and academic institutions are industries – knowledge industries, operating in a knowledge economy, where things are very much turned into commodities these days, by rapidly advancing technology for generating, storing, finding and communicating information. The increased productivity of knowledge generation will mean that we will need fewer, but better, knowledge institutions. Some of the old ones will survive, even prosper. Some will be disrupted. Treating disruptive innovation as a myth certainly is one option, but I wish professor Lepore would base that decision on something more than what appears to be rhetorical comments, a not very careful reading of the real literature, and, quite frankly, wishful thinking.

But I guess time – if not the Times – will show us what happens in the future. As for disruption, I would rather be the disruptor than the disruptee. I would have less money and honor, but more fun. And I would get to write the epitaph.

But then again, I have an insanely complicated coffee machine. And now it is time to go and clean it.

Disruptive is not quite as disruptive, it seems

Reuters has a great little tool showing the evolution of various buzzwords (via Boingboing). One of the worrying things is that “disruptive” is showing a remarkable growth:

[Chart: Reuters buzzword tracker showing the growth in use of “disruptive”]

I see this tendency (as it is with most buzzwords) that anything new (be it a technology or something else) that replaces something old is termed “disruptive”. A disruptive technology or innovation, however, as coined by Clayton Christensen, is an innovation where the incumbent companies are the ones least able to respond to it. This tends to be because the new product or service has these characteristics:

  1. Your best customers don’t want it. These demanding customers (and you want demanding customers, right?) are willing to pay top dollar for a better product – hence you try to make your product better to suit them. You then ignore the customers who do not need, and are not willing to pay for, the performance.
  2. Its performance is worse – at least in the dimensions traditionally used to measure performance. In Christensen’s original example – the disk drive industry – the existing customers wanted hard drives with more storage and higher access speed. They initially ignored the physical size of the disk drive, allowing new companies to gain dominance as new, physically smaller disk drives became available.
  3. If you entered that market, you would lose money. A disruptive innovation attacks from below – with lower profit margins. A former CEO of a minicomputer company expressed it this way: “When the PCs came, we had a choice: Selling $200,000 minicomputers with 60% profit margins, or $4,000 PCs with 20% profit margins. What would you choose?”
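The minicomputer CEO’s choice becomes stark when you spell out the arithmetic. This is a back-of-the-envelope illustration using only the numbers in the quote, not company data:

```python
# The arithmetic behind the minicomputer CEO's quote, spelled out.
mini_price, mini_margin = 200_000, 0.60
pc_price, pc_margin = 4_000, 0.20

mini_profit = mini_price * mini_margin  # $120,000 gross profit per unit
pc_profit = pc_price * pc_margin        # $800 gross profit per unit

# To match the gross profit of ONE minicomputer, you must sell:
units_needed = mini_profit / pc_profit
print(units_needed)  # 150.0
```

Faced with a 150:1 volume requirement just to stand still on gross profit, the “good” resource-allocation decision is obviously to stay upmarket – which is exactly the dilemma.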

The funny thing is, companies launching new products keep calling them “disruptive” – do they really want to say that their products are undesirable, poor and offering low profit margins? They might want to say that, but in my view most real disruptors prefer to keep their mouths shut and build their profitability under the radar of their entrenched competitors.

In other words, if a product is launched as disruptive, it probably isn’t.

MIT CISR Research Briefing on Enterprise Search

Last year I had the pleasure of spending time as a visiting scholar at MIT Center for Information Systems Research, and here is one of the results, now available from CISR’s website (free, but you have to register. Absolutely worth it, lots of great material):

Research Briefing: Making Enterprise Search Work: From Simple Search Box to Big Data Navigation; Andersen; Nov 15, 2012

Most executives believe that their companies would perform better if “we only knew what we knew.” One path to such an objective is enhanced enterprise search. In this month’s briefing, “Making Enterprise Search Work: From Simple Search Box to Big Data Navigation,” Visiting Scholar Espen Andersen highlights findings from his research on enterprise search. He notes that enterprise search plays a different role from general web or site-specific searches and it comes with its own unique set of challenges – most notably privacy. But companies trying to capitalize on their knowledge will invariably find search an essential tool. Espen offers practical advice on how to develop a more powerful search capability in your firm.

Introduction to GRA6834

This is an intro to a course (GRA6834 Business Development and Innovation Management) I am giving in the Fall, open to M.Sc. students at BI Norwegian School of Management, posted here because, well, I couldn’t be there to do the presentation myself. The course description is here, my presentation slides are here, I forgot to say when to turn to the next slide in the presentation (so you’ll have to guess from circumstance), and I do apologize for the rather booming voice, but this is what I could do on rather short notice…

If you have any questions, please email me.

The solution to American unemployment…

(Flash thought as I am listening to Erik Brynjolfsson and Andy McAfee talk about Race Against the Machine at the MIT Center for Digital Business research conference – an excellent event, by the way.)

The core issue identified in Race Against the Machine is that technology improves faster than humans. Consequently, a rising number of people get automated out of a job. Previously, that has not been a long-term problem, because new industries have sprung up to hire. Now, however, the new industries hire very few people (haven’t checked the facts, but someone said that Facebook, Google, Twitter and Amazon collectively have about 100,000 employees, which is the job growth needed per month to keep up with population growth in the US workforce.)

So – we need to find new areas where we can hire lots of people, to do jobs that, at least as of now cannot be automated.

Here is my tongue-in-cheek solution:

1. The US has a rising (or, perhaps, expanding) obesity problem.

2. Obesity is expensive, since obese people disproportionately consume health care.

3. Take all the unemployed, sort them into a) thin and b) thick.

4. Hire group a) to be personal coaches to group b).

5. Pay for it with the savings in health costs.

Great, job done. Now for some real work…

(On a serious note, first-line health care is probably an area that could consume a lot of workers. On the other hand, it will also experience many job losses – health care is vastly inefficient in the US now, primarily because it is so cumbersome to administrate and pay for.)

Update 5/24: I was wrong – personalized weight loss coaching is now available as an app.

Tips and tricks swap meet

Today I hosted a brown bag lunch with researchers from BI’s Technology Strategy group and MIT CISR. The objective was to get to know each other, but every meeting needs a topic, so I asked people to bring their computers and share a few smart things, useful web sites and other things they have discovered, that people wouldn’t know about.

Here is a list of some of the smart tricks and tools people came up with:

  • If you need to edit a large document in Word, create a table of contents, place it at the beginning of the document – and jump to the right chapter or subsection by control-clicking on the TOC. (Alternatively, use the document map feature, see this blog post.)
  • Pressing . (period) while in presentation mode in Powerpoint will give you a black screen, pressing the same key again gives you the slide back. Useful for making people listen to you rather than read the slide.
  • A tablet computer is useful for presentations: Draw on slides, use Windows Journal to sketch out diagrams and drawings – which you can then PDF and make available to students.
  • This article explains how to get rid of New York Times cookies with a bookmarklet.
  • Google Reader (since discontinued, use Newsblur instead) lets you read RSS feeds quickly and easily.
  • Clearly from Evernote is a great tool for reading webpages – removes unnecessary clutter and lets you save the page to Evernote.
  • Think-Cell is a great tool for creating charts in Powerpoint, faster and simpler and more good-looking than standard Excel.
  • Whenisgood.net is great for finding possible meeting times.
  • The Meeting Planner from timeanddate.com is useful.
  • If this then that lets you automate certain web tasks by monitoring information streams and taking action based on their results.
  • Hipmunk is great for finding flights quickly, has a great graphical display.
  • In Word, under the File/Open or File/Recent menu choice, there are little pushpin symbols that, if pushed, will make sure the document stays visible in the list. Very useful for keeping frequently used documents that are stored in SharePoint at hand without having to go through a lengthy access procedure.

The fun thing with a little meeting like this is that everyone comes away with at least one or two things they hadn’t thought about – which is more than you can say for most meetings.

A value chain at work

This old footage (via egmCartTech) of the 1936 production process at Chevrolet’s plant in Flint, Michigan, shows a value chain at work – i.e., a process where value is added in small, repeatable, sequential steps. This is how many people still see companies…

It is notable for many things – the relative imprecision of the production (dents in parts being marked for later fixes), the simple design of the cars (a two-box design built on a frame, soon to be overtaken by the monocoque design already introduced with Chrysler’s 1934 Airflow), and the notion of the human as the servant of the machine, doing simple things repetitively – things that companies would attempt to replace with robots in the 70s and 80s as production became increasingly componentized. Toyota eventually introduced its lean production principles, under which each worker is responsible for the quality of the preceding work and can stop the line. But no wonder GM had quality problems as designs got more complex…

Competing online at Lorange

I have just finished teaching (as a matter of fact, I am writing this from the classroom while the students are taking their exam) a two-day seminar called Competing Online at the Lorange Institute of Business, located in Horgen, a small town about half an hour south of Zürich in Switzerland. Teaching is normally quite tiring, but this time it was a breeze – firstly because there were only 9 students, secondly because they all had interesting experiences and viewpoints on how to use the Internet and Web 2.0 for business and personal purposes. As a consequence, I could run the class as an informal discussion, with less lecturing and quite a bit of learning for me as well as for the students.

The diversity of backgrounds was quite interesting – we had three people who owned their own companies (technical textile manufacturing, logistics, and personal credit), three from pharmaceutical and health companies, one from sports event marketing, one executive from a hotel company, and, last but not least, Isabella Löwengrip, who with her blog Blondinbella could provide very interesting perspectives on how to establish and promote a business on Web 2.0. She did, of course, blog (here and here and here) and tweet about the experience, occasionally in real time – and she took the pictures you see here.

Linus Murphy, lively and inspiring CEO of Masterstudies.com, was the main case under discussion after lunch on the first day – and he did a great job talking about the importance of making your company findable on Google. To do this, you have to make sure your content is fresh and not duplicated, that each page is about one thing only (so the search engine is not confused), and that you design the structure and context of the web site before handing it over to be made pretty by a designer. When most of your traffic is driven by search, you must be both findable and searchable.