Category Archives: Digitalization

On videoconferencing and security

Picture: Zoom

Yesterday began with a message from a business executive who was concerned about the security of Zoom, the video conferencing platform that many companies (and universities) have landed on. The reason was a newspaper article regurgitating several internet articles, partly about functionality that has been adequately documented by Zoom, partly about security holes that were fixed long ago.

So is there any reason to be concerned about Zoom or Whereby or Teams or Hangouts or all the other platforms?

My answer is “probably not” – at least not for the security holes discussed here, and for ordinary users (and that includes most small- to medium-sized companies I know about).

It is true that video conferencing introduces some security and privacy issues, but if we look at it realistically, the biggest problem is not the technology, but the people using it (something we nerds refer to as PEBKAC – Problem Exists Between Keyboard And Chair).

When a naked man sneaks into an elementary school class via Whereby, as happened a few days ago here in Norway, it is not due to technology problems, but because the teacher had left the door wide open, i.e., had not turned on the function that makes it necessary to “knock” and ask for permission to enter.

When anyone can record (and have the dialogue automatically transcribed) from Zoom, it is because the host has not turned off the recording feature. By the way, anyone can record a video conference with screen capture software (such as Camtasia), a sound recorder or for that matter a cell phone, and no (realistic) security system in the world can do anything about it.

When the boss can monitor that people are not using other software while sitting in a meeting (a feature that can be completely legitimate in a classroom – it is equivalent to the teacher looking out over the class to see if the students are awake), well, I don’t think the system is to blame for that either. Any leader who holds such irrelevant meetings that people do not bother to pay attention should rethink their communications strategy. Any executive I know would have neither time nor interest in activating this feature – because if you need technology to force people to wake up, you don’t have a problem technology can solve.

The risk of a new tool should not be measured against some perfect solution, but against what the alternative is if you don’t have it. Right now, video conferencing is the easiest and best tool for many – so that is why we use it. But we have to take the trouble to learn how it works. The best security system in the world is helpless against people writing their password on a Post-It, visible when they are in videoconference.

So, therefore – before using the tool – take a tour of the setup page, choose carefully what features you want to use, and think through what you want to achieve by having the meeting.

If that’s hard, maybe you should cancel the whole thing and send an email instead.

Getting dialogue online

Back in the nineties, I facilitated a meeting with Frank Elter at a Telenor video meeting room in Oslo. There were about 8 participants, and an invited presenter: Tom Malone from MIT.

The way it was set up, we first saw a one hour long video Tom had created, where he gave a talk and showed some videos about new ways of organizing work (one of the more memorable sequences was (a shortened version of) the four-hour house video.) After seeing Tom’s video, we spent about one hour discussing some of the questions Tom had raised in the video. Then Tom came on from a video conferencing studio in Cambridge, Massachusetts, to discuss with the participants.

The interesting thing, to me, was that the participants experienced this meeting as “three hours with Tom Malone”. Tom experienced it as a one hour discussion with very interested and extremely well prepared participants.

A win-win, in other words.

I was trying for something similar yesterday, guest lecturing in Lene Pettersen‘s course at the University of Oslo, using Zoom with early entry, chat, polling and all video/audio enabled for all participants. This was the first videoconference lecture for the students and for three of my colleagues, who joined in. In preparation, the students had read some book chapters and articles and watched my video on technology evolution and disruptive innovations.

For the two hour session, I had set up this driving plan (starting at 2 pm, or 14:00 as we say over here in Europe…):


Leading the discussion. Zoom allows you to show a virtual background, so I chose a picture of the office I would have liked to have…

14:00 – 14:15 Checking in, fiddling with the equipment and making sure everything worked. (First time for many of the users, so have them show up early so technical issues don’t eat into the teaching time.)
14:15 – 14:25 Lene introduces the class, talks about the rest of the course and turns over to Espen (we also encouraged the students to enter questions they wanted addressed in the chat during this piece)
14:25 – 14:35 Espen talking about disruption and technology-driven strategies.
14:35 – 14:55 Students into breakout rooms – discussing what it would take for video and digital delivery to be a disruptive innovation for universities. (Breaking students up into 8 rooms of four participants, asking them to nominate a spokesperson to take notes and paste them into the chat when they return, and to discuss the specific question: What needs to happen for COVID-19 to cause a disruption of universities, and how would such a disruption play out?)
14:55 – 15:15 Return to main room, Espen sums up a little bit, and calls on spokesperson from each group (3 out of 8 groups) based on the notes posted in the chat (which everyone can see). Espen talks about the Finn.no case and raises the next discussion question.
15:15 – 15:35 Breakout rooms, students discuss the next question: What needs to happen for DNB (Norway’s largest bank) to become a data-driven, experiment-oriented organization like Finn.no? What are the most important obstacles and how should they be dealt with?
15:35 – 15:55 Espen sums up the discussion, calling on some students based on the posts in the chat, sums up.
15:55 – 16:00 Espen hands back to Lene, who sums up. After 16:00, we stayed on with colleagues and some of the students to discuss the experience.


The dashboard as I saw it. Student names obscured.

Some reflections (some of these are rather technical, but they are notes to myself):

  • Not using Powerpoint or a shared screen is important. Running Zoom in Gallery view (I had set it up so you could see up to 49 at the same time) and having the students log in to Zoom and upload a picture gave a feeling of community. Screen and/or presentation sharing breaks the flow for everyone – When you do it in Zoom, the screen reconfigures (as it does when you come back from a breakout room) and you have to reestablish the participant panel and the chat floater. Instead, using polls and discussion questions and results communicated through the chat was easier for everyone (and way less complicated).
  • I used polls on three occasions: before each discussion breakout, and at the end to ask the students what the experience was like. They were very happy about it and had good pointers on how to make it better. Satisfactory results, I would say.

  • We had no performance issues and rock-steady connection the whole way through.
  • It should be noted that the program is one of the most selective in Norway and the students are highly motivated and very good. During the breakout sessions I jumped into each room to listen in on the discussion (I learned that it was best to pause recording to avoid a voice saying “This session is being recorded” as I entered). The students were actively discussing in every group, with my colleagues (Bendik, Lene, and Katja) also participating. I had kept the groups to four participants, based on feedback from a session last week, where the groups had been 6-7 students and had issues with people speaking over each other.
  • Having a carefully written driving plan was important, but still, it was a very intense experience – I was quite exhausted afterwards. My advice on not teaching alone stands – in this case, I was the only one with experience, but that will change very fast. I kept feeling rushed and would have liked more time, especially in the summary sections, where I would have liked to bring more students in to talk.
  • I did have a few breaks myself – during the breakout sessions – to go to the bathroom and replenish my coffee – but failed to allow for breaks for the students. I assume they managed to sneak out when necessary (hiding behind a still picture), but next time, I will explicitly have breaks, perhaps suggest a five minute break in the transition from main room to breakout rooms.

Conclusion: This can work very well, but I think it is important to set up each video session based on what you want to use it for: To present something, to run an exercise, to facilitate interaction. With a small student group like this, I think interaction worked very well, but it requires a lot of preparation. You have to be extremely conscious of time – I seriously think that any two-hour classroom session needs to be rescheduled to a three-hour session just because the interaction is slower, and you need to have breaks.

As Winston Churchill almost said (he said a lot, didn’t he): We make our tools, and then our tools make us. We now have the tools, it will be interesting to see how the second part of this transition plays out.

Dealing with cheating

At BI Norwegian Business School, we are (naturally and way overdue, but a virus crisis helps) moving all exams to digital. This means a lot of changes for people who have not done that before. One particular anxiety is cheating – normally not a problem in the subjects I teach (case- and problem oriented, master/executive, small classes) but certainly is an issue in large classes at the bachelor level, where many answers are easily found online, the students are many, and the subjects introductory in nature.

Here are some strategies to deal with this:

  • Have an academic honesty policy and have the students sign it as part of the exam. This is to make them aware of the risk they run if they cheat.
  • Keep the exam time short – three hours at the max – and deliberately ask more questions than usual. This makes for less time for cheating (by collaborating) because collaboration takes time. It also means introducing more differentiation between the students – if just a few students manage to answer all questions, those are the A candidates. Obviously, you need to adjust the grade scale somewhat (you can’t expect all to answer everything) and there is an issue of rewarding students who are good at taking exams at the expense of deep learning, but that is the way of all exams.
  • Don’t ask the obvious questions, especially not those asked on previous exams. Sorry, no reuse. Or perhaps a little bit (it is a tiring time.)
  • Tell the students that all answers will be subjected to an automated plagiarism check. Whether this is true or not does not matter – plagiarism checkers are somewhat unreliable, have many false positives, and require a lot of follow-up work – but just the threat will eliminate much cheating. (Personally, I look for cleverly crafted answers and Google them, amazing what shows up…)
  • Tell the students that after the written exam, they can be called in for an oral exam where they will need to show how they got their answers (if it is a single-answer, mathematically oriented course) or answer more detailed questions (if it is a more analysis- or literature oriented course). Who gets called in (via videoconference) will be partially random and partially based on suspicion. Failing the orals results in failing the course.
  • When you write the questions: If applicable, Google them, look at the most common results, and deliberately reshape the questions so that the answer is not one of those.
  • Use an example for the students to discuss/calculate, preferably one that is fresh from a news source or from a deliberately obscure academic article they have not seen before.
  • Consider giving sub-groups of students different numbers to work from – either automatically (different questions allocated through the exam system) or by having questions like “If your student ID ends in an even number (0,2,4,6,8) answer question 2a, otherwise answer question 2b” (use the student ID, not “birthday in January, February, March…” as this will be the only marker you have.) The questions may have the same problem, but with small, unimportant differences such as names, coefficients or others. This makes it much harder for the students to collaborate. (If you do multiple-choice questions in an electronic context, I assume a number of the tools will have functionality for changing the order of the questions – it would, frankly, astonish me if they did not – but I don’t use multiple choice myself, so I don’t know.)
  • Consider telling the students they will all get different problems (as discussed above) but not doing it. It still will prevent a lot of cheating simply because the students believe they all have different problems and act accordingly.
  • If you have essay questions, ask the students to pick a portion of them and answer them. I do this on all my exams anyway – give the students 6 questions with short (150 words) answers and ask them to pick 4 and answer only those, and give them 2 or 3 longer questions (400 words or so) and ask them to answer only one. (Make it clear that if they answer them all, only the first answers will be considered.) Again, this makes cheating harder.
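The even/odd allocation and question-shuffling ideas above can be sketched in a few lines. This is a hypothetical illustration – the parity rule, question names and texts are made up, not taken from any real exam system:

```python
# Hypothetical sketch: allocate exam question variants from a student ID,
# and give each student the questions in a per-student (but reproducible) order.
import random

# Illustrative variants: same problem, different coefficients
VARIANTS = {
    "2a": "Compute the NPV with a discount rate of 7%.",
    "2b": "Compute the NPV with a discount rate of 9%.",
}

def assign_variant(student_id: str) -> str:
    """Even last digit of the student ID -> question 2a, odd -> question 2b."""
    return "2a" if int(student_id[-1]) % 2 == 0 else "2b"

def shuffle_questions(questions: list[str], student_id: str) -> list[str]:
    """Shuffle the question order per student, seeding the RNG with the ID
    so the grader can reconstruct exactly what each student saw."""
    rng = random.Random(student_id)  # deterministic for a given ID
    shuffled = questions[:]
    rng.shuffle(shuffled)
    return shuffled
```

Seeding with the student ID is the point: the order looks random to the students, but is reproducible for the grader.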

Lastly: You can’t eliminate cheating in regular, physical exams, so don’t think you can do it in online exams. But you certainly can increase the disincentives to do so, and that is the most you can hope for.

Department for future ideas
I have always wanted to use machine learning for grading exams. At BI, we have some exams with 6000 candidates writing textual answers. Grading this surely must constitute cruel and unusual punishment. With my eminent colleague Chandler Johnson I tried to start a project where we would have graders grade 1000 of these exams, then use text recognition and other tools, build an ML model and use that to grade the rest. Worth an experiment, surely. The project (like many other ideas) never took off, largely because of difficulties of getting the data, but perhaps this situation will make it possible.
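To make the idea concrete, here is a deliberately crude sketch of the grading concept: predict a grade for an ungraded answer from its similarity to the hand-graded subset. Everything here – the data, the bag-of-words representation, the nearest-neighbour rule – is illustrative; the actual project would use proper text models and careful validation:

```python
# Hypothetical sketch: grade ungraded exam answers by similarity to a
# hand-graded subset (crude nearest-neighbour; illustrative only).
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector for one answer."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_grade(answer: str, graded: list[tuple[str, float]]) -> float:
    """Assign the grade of the most similar hand-graded answer."""
    vec = vectorize(answer)
    best = max(graded, key=lambda pair: cosine(vec, vectorize(pair[0])))
    return best[1]

# Illustrative data: in the project, ~1000 graded answers would seed the model
graded = [("disruption starts at the low end of the market", 90.0),
          ("a disruptive innovation is cheaper and simpler", 75.0)]
print(predict_grade("disruption begins in the low end of the market", graded))
```

With 1000 graded answers as training data, one would of course hold some out to measure how well the predicted grades agree with human graders before trusting the model with the remaining 5000.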

And that would be a good thing…

A teaching video – with some reflections

Last Thursday, I was supposed to teach a class on technology strategy for a bachelor program at the University of Oslo. That class has been delayed for a week and (obviously) moved online. I thought about doing it as a video conference, but why not make a video and ask the students to see it before class? Then I can run the class interactively, discussing the readings and the video rather than spending my time talking into a screen. Recording a video is more work, but the result is reusable in other contexts, which is why I did it in English, not Norwegian. The result is here:

To my teaching colleagues: The stuff in the middle is probably not interesting – see the first two and the last five minutes for pointers to teaching and video editing.

For the rest, here is a short table of contents (with approximate time stamps):

  • 0:00 – 2:00 Intro, some details about recording the video etc.
  • 2:00 – 27:30 Why technology evolution is important, and an overview of technology innovation/evolution processes
    • 6:00 – 9:45 Standard engineering
    • 9:45 – 12:50 Invention
    • 12:50 – 15:50 Structural deepening
    • 15:50 – 17:00 Emerging (general) technology
    • 17:00 – 19:45 Substitution
    • 19:45 – 25:00 Expansion, including dominant design
    • 25:00 – 27:30 Structuration
  • 27:30 – 31:30 Architectural innovation (technology phases)
  • 31:30 –  31:45 BREAK! (Stop the video and get some coffee…)
  • 31:45 – 49:40 Disruption
    • 31:45 – 38:05 Introduction and theory
    • 38:05 – 44:00 Excavator example
    • 44:00 – 46:00 Hairdresser example
    • 47:00 – 47:35 Characteristics of disruptive innovations
    • 47:35 – 49:40 Defensive strategies
  • 49:40 – 53:00 Things take time – production and teaching…
  • 53:00 – 54:30 Fun stuff

This is not the first time I have recorded videos, by any means, but it is the first time I have created one for “serious” use, where I try to edit it to be reasonably professional. Some reflections on the process:

  • This is a talk I have given many times, so I did not need to prepare the content much – mainly select some slides. For a normal course, I would use two to three hours to go through the first 30 minutes of this video – I use much deeper examples and interact with the students, have them come up with other examples and so on. The disruption part typically takes 1-2 hours, plus at least one hour on a specific case (such as steel production). Now the format forces me into straight presentation, as well as a lot of simplification – perhaps too much. I aim to focus on some specifics in the discussion with the students.
  • I find that I say lots of things wrong, skip some important points, forget to put emphasis on other points. That is irritating, but this is straight recording, not a documentary, where I would storyboard things, film everything in short snippets, use videos more, and think about every second. I wanted to do this quickly, and then I just have to learn not to be irritated at small details.
  • That being said, this is a major time sink. The video is about 55 minutes long. Recording took about two hours (including a lot of fiddling with equipment and a couple of breaks). Editing the first 30 minutes of the video took two hours, and another hour and a half for the disruption part (mainly because by then I was tired and said a number of unnecessary things that I had to remove).
  • Using the iPad to be able to draw turned out not to be very helpful in this case, it complicated things quite a bit. Apple’s SideCar is still a bit unpredictable, and for changing the slides or the little drawing on the slides I did, a mouse would have been enough.
  • Having my daughter as audience helps, until I have trained myself to look constantly into the camera. Taping a picture of her or another family member to the camera would probably work almost as well, with practice. (She has heard all my stories before…)
  • When recording with a smartphone, put it in flight mode so you don’t get phone calls while recording (as I did.) Incidentally, there are apps out there that allow you to use the iPhone as a camera connected to the PC with a cable, but I have not tested them. It is easy to transfer the video with AirPlay, anyway.
  • The sound was recorded with two microphones (the iPhone and a Røde wireless mic). I found that it got “fatter” if I used both tracks, so I did that, but it does sometimes screw up the preview function in Camtasia (though not the finished product). That would also have captured both my voice and my daughter’s (though she did not ask any questions during the recording, except in the outtakes.)
  • One great aspect of recording a video is that you can fix errors – just pause and repeat whatever you were going to say, and then cut it in editing. I also used video overlays to correct errors in some slides, and annotations to correct when I said anything wrong (such as repeatedly saying “functional deepening” instead of “structural deepening”.) It does take time, however…

My excellent colleague Ragnvald Sannes pointed out that this is indicative of how teaching will work in the future, from a work (and remuneration) perspective. We will spend much more time making the content, and less time giving it. This, at the very least, means that teachers can no longer be paid based on the number of hours spent teaching – or that we need to redefine what teaching means…

Moving your course online: Five things to consider

Another video on moving to video-based teaching, this time about some things to consider to make the transition as easy for yourself as possible (as well as improving the experience for the students):

From the Youtube posting:

Many teachers now have to move their courses online, and are worried about it. Teaching online is different from teaching in a classroom, but not so different: The main thing is still that you know your material and care about the people at the other end. There are some things to consider, however, so here are five tips to think about when you move your course online:

  1. Talk to one student, not many.
  2. Structure, structure, structure (much more important in online teaching).
  3. Interaction is possible, but needs to be planned.
  4. Bring a friend: Teach with a colleague, for mutual help and a better experience.
  5. Use the recording as a tool for making your teaching better, by reviewing it and editing it yourself.

Five tips for better video teaching

In these viral times, a lot of universities will need to switch to video teaching, and for many teachers, this is a new experience. Here is a short (and fast) video I made with five – non-technical – tips for better video conferencing and teaching.

To sum it up:

1. Sound is more important than picture.
2. Look into the camera!
3. Don’t make the obvious mistakes: Background, lighting, and clothing.
4. Be lively! The medium consumes energy, you need to compensate.
5. Get to know the tools.

Good luck!

Teaching with cases online

Case teaching is not just for the classroom – increasingly, you can (and some schools do) offer discussion-based (or, at least, interaction-oriented) courses online. My buddy Bill Schiano and I wrote a long note on how to do this for Harvard Business Publishing back in 2017 – and I then completely forgot about it.

Recently, I rediscovered it online, published by HBP – so, here goes: Schiano, Bill and Espen Andersen (2017): Teaching with Cases Online, Harvard Business Publishing. Enjoy. (More resources available at the HBP Teaching Center page.)

Incidentally – should you (as a school) consider doing courses this way, beware that this is not a cost reduction strategy. You probably will need to pay the online teachers more than those doing regular classrooms, simply because it is more work and quite a bit more design, at least in the beginning. But it may be an excellent way of reaching student groups you otherwise could not reach, for geographical or timing reasons.

Student cases of digitalization and disruption

I teach an M.Sc. class called “business development and innovation management”, and challenge students to write Harvard-style cases about real companies experiencing issues within these areas. The results are always fun and provide learning opportunities for the students: You may not provide the answer for the company, but you get a chance to really learn about one company or one industry and dive into the complexities and intricacies of their situation. That knowledge, in itself, is valuable when you are hitting the job market.

Here is a (fairly anonymized) list of this year’s papers:

  • disruption in the analytics industry: One group is studying SAS Institute and how their closed software and architecture model is being challenged by open-source developments
  • disruption in the consulting industry: One group wants to study a small consulting company and how they should market some newly developed software that allows for automated, low-cost analysis
  • establishing a crypto-currency exchange: One group wants to study strategies for establishing a payment and exchange service for crypto-currencies
  • marketing RPA through a law firm: One group wants to study how a large law firm can market their internal capabilities for RPA (robotics process automation) in an external context
  • fast access to emergency services: One group wants to write a case on Smarthelp and how that service can be spread and marketed in a wider context
  • using technology to manage sports club sponsorship: One group wants to study how to develop strategy for a startup company that helps participation-based sports clubs gain corporate sponsorships
  • electronic commerce and innovation in the agricultural equipment sector: One group wants to study how a vendor of farm equipment and supplies can extend their market and increase their innovative capability through ecommerce and other digital initiatives
  • machine learning in Indian banking: One group wants to study how machine learning could be used to detect money laundering in a large Indian bank
  • social media analysis in consumer lending: One group wants to study an Indian startup company that uses digital indicators from users’ online behavior to facilitate consumer financing for online purchases

All in all, a fairly diverse set of papers – I am looking forward to reading them.

Analytics III: Projects

Together with Chandler Johnson and Alessandra Luzzi, I currently teach a course called Analytics for Strategic Management. In this course (now in its third iteration), executive students work on real projects for real companies, applying various forms of machine learning (big data, analytics, whatever you want to call it) to business problems. We have just finished the second of five modules, and the projects are now defined.

Here is a (mostly anonymised, except for publicly owned companies) list:

  • An IT service company that provides data and analytics wants to predict customer use of their online products, in order to provide better products and tailor them more to the most active customers
  • A gas station chain company wants to predict churn in their business customers, to find ways to keep them (or, if necessary, scale down some of their offerings)
  • An electricity distribution network company wants to identify which of their (recently installed) smart meters are not working properly, to reduce the cost of inspection and increase the quality of
  • A hairdressing chain wants to predict which customers will book a new appointment when they have had their hair done, in order to increase repeat business and build a group of loyal customers
  • A large financial institution wants to identify employees that misuse company information (such as looking at celebrities’ information), in order to increase privacy and data confidentiality
  • NAV IT wants to predict which employees are likely to leave the company, in order to better plan for recruitment and retraining
  • OSL Gardermoen wants to find out which airline passengers are more likely to use the tax-free shop, in order to increase sales (and not bother those who will not use the tax-free shop too much)
  • a bank wants to find out which of their younger customers will need a house loan soon, to increase their market share
  • a TV media company wants to find out which customers are likely to cancel their subscription within a certain time frame, to better tailor their program offering and their marketing
  • a provider of managed data centers wants to predict their customers’ energy needs, to increase the precision of their own and their customers’ energy budgets
  • Ruter (the public transportation umbrella company for the Oslo area) wants to build a model to better predict crowding on buses, to, well, avoid overcrowding
  • Barnevernet wants to build a model to better predict which families are most likely to be approved as foster parents, in order to speed up the qualification process
  • an electrical energy production company wants to build a model to better predict electricity usage in their market, in order to plan their production process better

All in all, a fairly typical set of examples of the use of machine learning and analytics in business – and I certainly like to work with practical examples with very clearly defined benefits. Over the next three modules (to be finished in the Spring) we will take these projects closer to fruition, some to a stage of a completed proposal, some probably all the way to a finished model and perhaps even an implementation.

The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – the Code of Hammurabi on buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer, in others licensing is required, perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined digital around 1942, Tukey coined software in 1952. The 1968-69 conferences on software engineering coined the term, but a CACM letter by Anthony Oettinger used it in 1966, and the term was used before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra, Knuth, sees them as practical art, art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with forces that are apparent when designing and building software systems. Good engineering based on discovery, invention, and implementation – and this has been the pattern of software engineering – dance between science and implementation.

Lovelace first programmer, algorithmic development. Boole and boolean algebra, implementing raw logic as “laws of thought”.

First computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanisation with punch cards (Gertrude Blanch, human computing, 1938; J. Presper Eckert on punch card methods (1940) – the first methodology, with pattern languages).

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy Du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high-order languages, first general-purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn real-time (1948), Wheeler and Gill subroutines (1949), Eckert and Mauchly with software as a thing of itself (1949). John Backus with imperative programming (Fortran, 1946), Goldstine and von Neumann flowcharts (1947). Commercial computers – LEO for a tea company in England, John Pinkerton creating its operating system; Hopper with ALGOL and COBOL, reuse (Bemer, Sammet). SAGE system important, command and control – Jay Forrester and Whirlwind 1951, Bob Evans (Sage, 1957), Strachey time sharing 1959, St Johnson with the first programming services company (1959).

Software crisis – not enough programmers around, machines more expensive than the humans, a priesthood of programming: carry programs over and get results, batch. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problem was complexity and productivity, hence software engineering (Margaret Hamilton), arguing that process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. First SW engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – second generation of SW engineering: Booch (1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and the Internet changed things again: Sutherland and Scrum, Beck on eXtreme Programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (first major contract between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organizational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: Ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time, trusted, and there we are. Computer science focused on physics and algorithms, software engineering on process, architecture, economics, organisation, HCI. SWEBOK first 2004, latest 2014, codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most SW development is like engineering a city: you have to change things in the presence of things you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as DeepMind. Still a lot we don’t know how to do – such as architecture for AI; little rigorous specification and testing. Orchestration of AI will change how we look at systems: teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.

Sources: https://twitter.com/Grady_Booch and https://computingthehumanexperience.com/

A tour de Fry of technology evolution

There are many things to say about Stephen Fry, but it is enough to show this video, filmed at Nokia Bell Labs, explaining, amongst other things, the origin of microchips, the power of exponential growth, and the adventures and consequences of evolving performance and functionality. I am beginning to think that “the apogee, the acme, the summit of human intelligence” might actually be Stephen himself:

(Of course, the most impressive feat is his easy banter on hard questions after the talk itself. Quotes like: “[and] who is to program any kind of moral [into computers ]… If [the computer] dives into the data lake and learns to swim, which is essentially what machine learning is, it’s just diving in and learning to swim, it may pick up some very unpleasant sewage.”)

Big Data and analytics – briefly

Data and data analytics are becoming more and more important for companies and organizations. Are you wondering what data and data science might do for your company? Welcome to a three-day ESP (Executive Short Program) called Decisions from Data: Driving an Organization with Analytics. It will take place at BI Norwegian Business School from December 5-7 this year. The short course is an offshoot of our very popular executive programs Analytics for Strategic Management, which are fully booked. (Check this list (Norwegian) for a sense of what those students are doing.)

Decisions from Data is aimed at managers who are curious about Big Data and data science and want an introduction and an overview without having to take a full course. We will talk about and show various forms of data analysis, discuss the most important obstacles to becoming a data-driven organization and how to deal with data scientists, and, of course, give lots of examples of how to compete with analytics. The course will not be tech-heavy, but we will look at and touch a few tools, just to get an idea of what we are asking those data scientists to do.

The whole thing will be in English because, well, the (in my humble opinion) best people we have on this (Chandler Johnson and Alessandra Luzzi) are from the USA and Italy, respectively. As for myself, I tag along as best I can…

Welcome to the data revolution – it starts here!

Smarthelp: Locating and messaging passengers

 

If you are a public transportation company: How do you tell your prospective passengers that their travel plans may have to change?

Public transportation companies know a lot about their passengers’ travel patterns, but not as much as you would think – and, surprisingly, they know less now that ticket sales have been automated than they did before.


RuterBillett – a ticketing app

Let’s take a concrete company as an example: Ruter AS, the public transportation authority of Greater Oslo. Ruter is a publicly owned company that coordinates various suppliers of transportation services (bus, tram, train, some ferries) in the Oslo area. The company has been quite innovative in their use of apps, selling most of their tickets on the RuterBillett app, and having many of their customers plan their journey on the RuterReise app. The apps are very popular because they make it very easy both to figure out which bus or train to take, and to buy a ticket.

The company has a problem, though: While they know that someone bought a ticket on the ticketing app, they don’t know which particular bus, tram or other service the passenger took (a ticket typically gives you one hour of open travel on their services, no matter how many of them you use).


RuterReise – a journey planning app

They could get some information from what people have been searching for, but the two apps are not linked, and they don’t know whether a passenger who searched for a particular route actually bought a ticket and did the journey – or not. There are many reasons for this lack of knowledge, but privacy issues – Norway has very strict laws on privacy – are important. Ruter does not want to track where its customers are travelling, at least not if it in any way involves identifying who a passenger actually is.

Not knowing where passengers are is a problem in many situations: It creates difficulties for dimensioning capacity, and it makes it difficult to communicate with passengers when something happens – such as a bus delay or cancellation.

Identifying travel patterns and communicating with passengers

The problem for Ruter is that they want to know where people are travelling (so they can figure out how many buses or trams they need to schedule), they need to know who regularly takes certain journeys (so they know whom to send a message to if that route is not working), and they need to know who is in a certain area at a certain time (so they don’t send you a message about your bus being delayed if you are out of town, for instance). All of this is easy, except for one thing: Norway has very strict privacy laws – already quite similar to the EU’s General Data Protection Regulation, which goes into effect in 2018 – and Ruter cares deeply about not being seen as a company that monitors where people travel.

In short, they need to know where you travel, but do not want to know who you are.

This is a seemingly impossible challenge, but Smarthelp Secure Infrastructure, in combination with Smart Decision Support, makes it possible. The communications platform creates an end-to-end encrypted communication channel between a central system and the smartphone. Using technology developed because we had to solve the problem of medical-level encrypted communication between emergency centers and individual users, Smarthelp has technology that allows someone to track specific information you allow access to – say, the fact that you are in a certain area, or that you regularly travel certain paths – without sharing other information, such as your name.

This would allow Ruter, when something happens, to send a message to people who a) regularly take, say, bus route 85, and who b) are in an area where it is conceivable that they could take the bus, given their prior patterns, the time of day, and so on. For the individual passenger, this means you only get pertinent messages – you don’t get messages about bus routes you don’t normally take (unless you actually get on the bus), and you don’t get messages when you are far enough from the bus that it is clear you are not going to take it anyway. In a world of information overload, this is extremely important – flood users with messages, and they stop reading them.
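The selection logic sketched above – match on travel pattern, then filter on proximity – can be illustrated in a few lines. This is a toy model only: the data structures, the pseudonymous device IDs, and the 2 km radius are my assumptions for illustration, not Smarthelp’s actual design.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Passenger:
    device_id: str    # pseudonymous token - no name or identity attached
    usual_routes: set # routes this device regularly travels
    lat: float        # last known position, shared by consent
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def recipients(passengers, route, lat, lon, max_km=2.0):
    """Devices that a) regularly take the affected route and
    b) are currently close enough for the message to be relevant."""
    return [p.device_id for p in passengers
            if route in p.usual_routes
            and distance_km(p.lat, p.lon, lat, lon) <= max_km]
```

Note that the central system only ever sees pseudonymous device IDs, routes and positions – never a name – which is the point of the privacy design described above.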

The future of public transportation

A selective message and geolocation service, such as Smarthelp provides, is an evolutionary step, an optimization of the current way transportation is coordinated. In the long term (especially if we start to talk about self-driving vehicles), the whole way we coordinate public transportation will change. As one Ruter employee told me: A public transportation company is “someone who takes you from a place you are not to a place you don’t want to go.”

The next step in public transportation is that users tell the company not just that they want to get on the bus, but also where they want to go. I have been told that in an experiment, Telenor found that, one sunny summer afternoon, fully half of their employees (located at Fornebu outside Oslo) planned to go to Huk, a public beach on Bygdøy. The trip from Telenor’s headquarters at Fornebu takes 10 minutes by car, but more than 30 minutes by public transportation, involving two bus routes. If Ruter had known about these travel plans, though, it could have just rolled up some buses and driven the employees directly, vastly improving the service – and avoiding clogging up the regular buses to Bygdøy.

And that is the future of public transportation: Instead of planning where you will go in terms of geography, you will tell the public transportation company where you want to go, and they will get you there. With self-driving cars, they will be able to tell you when you will arrive at your destination – but, perhaps, not be willing to tell you the actual route. As a passenger, you probably will not care – after all, what matters to you is when you arrive, not by which route.

That would, in effect, mean that we have transitioned public transportation from circuit switching to packet switching, effectively turning the bus into the Internet. But that is for the future.

In the meantime, there is Smarthelp.


(I am on the board of Råd AS, a company that has developed the platform SmartHelp for Norwegian emergency services, allowing shared situational awareness, communication and privacy. The company is now seeking customers and collaborators outside this market.)

Smarthelp is a platform technology consisting of, at present, three elements: Smarthelp Rescue, an app for iPhone and Android that allows users to transmit their position to an emergency service; Smarthelp Decision Support, a decision support system which allows an operator to locate and communicate with users (both with the app and without), and Smarthelp Secure Infrastructure, a granularly encrypted communications platform for secure, private communication. If you want more information, please contact me or Fredrik Øvergård, CEO of SmartHelp.

SmartHelp: Locating employees in a crisis

If there is a crisis – do you know where your people are?

Imagine the situation: An event (terrorist attack, industrial accident, public transportation accident) of some proportion happens. Many people are hurt, lots of rumors abound, emergency services are responding. Almost immediately, the question arises: Are any of my employees affected by this – and do they need help?

At present, most organizations locate their employees by calling them or sending emails. This is slow and ineffective – when Norway was hit by a terrorist bomb in the Oslo city centre in 2011 during the summer holiday, it took one of the large newspapers more than two days of frantic telephoning to find all their employees. Most of the employees were, of course, just fine, but the company still had to locate them all. In such a situation, knowing who is not in danger quickly is very important, because it lets you concentrate resources on those who need help.

Smarthelp Decision Support, the emergency service communication platform, allows an organization to quickly – within minutes – determine where its employees are and whether they need help. Smarthelp does this while maintaining privacy of the individual employee.

Most large organizations have a system where employees register where they travel on business. For this service to work, the employee has to remember to update it, though for some companies, this happens automatically if they purchase their tickets through a specific travel agency. While this may help, people travel for pleasure, deviate from their itineraries, forget to register their travels, and purchase their tickets from the cheapest, rather than the official source. Consequently, nobody knows where they really are.

SmartHelp Decision Support (see picture) allows the company to set up a geographical area surrounding the event, and contact all their employees (based on lists of telephone numbers) to determine whether they are inside this area or not.


Here is another example: You are responsible for security in a large company facility – say, an office building. The company receives a bomb threat which necessitates evacuating the building with thousands of employees. If the employees have SmartHelp on their phones, you can communicate with them all and determine whether they (or at least their smartphones) have left the building, limited by GPS accuracy. You can define a rallying point or area and get an automatic message as soon as someone enters the area, allowing you to quickly determine who is not accounted for. (At this point, GPS location – which we use – does not allow precise location inside a building, but that could change as WiFi location services get better.)
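The rally-point bookkeeping is conceptually simple: a roster, a stream of positions, and a radius check. Here is a minimal sketch of that idea – the 50 m radius, the planar distance approximation and all names are my own illustrative assumptions, not part of SmartHelp.

```python
from math import cos, radians, sqrt

def in_area(pos, center, radius_m=50.0):
    """Rough planar distance check - adequate at building scale,
    where GPS accuracy (tens of metres) dominates anyway."""
    dlat_m = (pos[0] - center[0]) * 111_320  # ~metres per degree latitude
    dlon_m = (pos[1] - center[1]) * 111_320 * cos(radians(center[0]))
    return sqrt(dlat_m ** 2 + dlon_m ** 2) <= radius_m

def unaccounted(roster, positions, rally_point, radius_m=50.0):
    """Roster members whose device has not (yet) reported a position
    inside the rally area - the set the security officer watches shrink."""
    return {emp for emp in roster
            if emp not in positions
            or not in_area(positions[emp], rally_point, radius_m)}
```

The security officer’s screen then only needs to show the shrinking `unaccounted` set, so attention goes to the people who may actually need help.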

Another advantage is information: When the November 2015 terrorist attacks in Paris happened, there were (as is usual) lots of rumors circulating in the hundreds of thousands of Twitter messages and other social channels. With SmartHelp, the authorities would have been able to send targeted messages to specific areas, conveying a precise and authoritative message across a cacophony of noise and misinformation.

SmartHelp works anywhere in the world where there is mobile reception (I have used it to signal my position to my host in Shanghai, for instance). Privacy is handled through an ingenious cryptographic architecture that is secure and fast – the platform is certified for handling medical information under the Norwegian data privacy laws, among the strictest in the world.

If you want more information, please contact me or Fredrik Øvergård, CEO of SmartHelp.


(I am on the board of Råd AS, a company that has developed the platform SmartHelp for Norwegian emergency services, allowing shared situational awareness, communication and privacy. The company is now seeking customers and collaborators outside this market.)

Smarthelp is a platform technology consisting of, at present, three elements: Smarthelp Rescue, an app for iPhone and Android that allows users to transmit their position to an emergency service; Smarthelp Decision Support, a decision support system which allows an operator to locate and communicate with users (both with the app and without), and Smarthelp Secure Infrastructure, a granularly encrypted communications platform for secure, private communication. If you want to see how the system works in a 911 central situation, see this video:

Made my day!

I just got the message that the new bachelor program Informatikk: Digital Økonomi og Ledelse (Informatics: Digital Economics and Management) is now the most sought-after study program in Norway, with 19 applicants per available place (514 first-priority applicants for 27 available places).

Since I took the initiative for this program and developed it with colleagues at the University of Oslo (where I have an adjunct position), this definitely made my day. Week, actually.

Just sayin’…

Notes from ACM Webinar on blockchain (etc.)

The Next Radical Internet Transformation: How Blockchain Technology is Transforming Business, Governments, Computing, and Security Models

Speaker: Mark Mueller-Eberstein, CEO & Founder at Adgetec Corporation, Professor at Rutgers University, Senior Research Fellow at QIIR

Moderator: Toufi Saliba, CEO, PrivacyShell and Chair of the ACM PB Conference Committee

Warning: These are notes taken live. Errors and omissions will occur. No responsibility whatsoever.

  • intro: old enough to remember the discussions in the early 90s about how the internet would change mail services – completely forgetting shopping, entertainment and others
  • Blockchain solves the problem of transferring value between Internet users without a third party
  • goes beyond the financial industry, can handle any kind of transaction
  • most of the world has access to a mobile phone, only about 20% has access to the banking system
  • Blockchain is the banking industry’s Uber movement
  • Blockchain much wider than Bitcoin, will facilitate new business models.
  • Blockchain transfers rather than copies digital assets, making sure there is only one instance of it.
    • settlement process: no clearing houses or central exchanges
    • peer-to-peer transfers, validation by network
  • Example: WeChat taking over payments in China, no link to banks
  • many commercial or government services are basically “databases” that are centrally managed, with one central point of failure
  • Blockchain allows a distributed ledger, information put in cannot be changed
    • Estonia thinking about a Blockchain in case of hacking or occupation
  • public (open), private and government blockchains
  • allows new services to existing customers, lots of inefficiencies up for grabs
    • estate records, voting, domain control, escrow, etc…
    • iPayYou allows use of Bitcoin
    • Walt Disney looking at Blockchain (DragonChain) for internal transfers, also use it for tracking supply chain to their cruise ships. Opensourced it.
  • 80% of Bitcoin mining done in China
  • regulation comes with a cost
  • Shenzhen want to be Blockchain Tech capital
  • 6-level security model, developed by William Mougayar (goes through it in detail: transaction, account, programming, distributed organizations, network (51% attacks, perhaps as low as 30%, smaller blockchains more vulnerable), governance)
  • Ethereum blockchain focusing on smart contracts: hard forked in 2016 over the DAO issue, where somebody hacked the DAO code to siphon off money – hacking a program using the blockchain, not the blockchain itself
  • a credit card transaction can take up to 30 days, with disputes and everything; Blockchain is almost instant
  • How “real” is blockchain technology?
    • Goldman-Sachs invested $500m+
    • 15% of top global banks intend to roll out full-scale, commercial blockchain
    • etc.
  • what is holding it back?
    • difficult to use, understand, buy in; perception of risk and legality
    • difficult to see value for the individual
  • questions:
    • what are the incentives and adoption models?
      • different philosophies: computing power must be made available in the network: industrial mining vs. BitTorrent model, the amount of computing provided will be important, if we can find a model where just a little bit from every mobile phone is required
    • what are the hard costs of Blockchain?
      • you can google the costs. There are other approaches being developed, will post some links
    • can Blockchain be compromised by a virus?
      • theoretically, yes. Bitcoin is 10 years without, open source means verification (change is happening slowly because of code inspection)
      • comes back to incentive and governance model
  • and that was that…recording will be at webinar.acm.org in a few days.
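The core idea behind the “information put in cannot be changed” claim in the notes above is easy to illustrate: each block stores a hash of its contents together with the previous block’s hash, so tampering with any block breaks every later link. The following is a toy sketch of just that chaining property – real blockchains add consensus, mining and peer-to-peer distribution on top, none of which is shown here.

```python
import hashlib
import json

def block_hash(block):
    """Hash of a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, data):
    """Append a block whose hash covers both its data and the prior hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def verify(chain):
    """Valid only if every stored hash matches the block's contents
    and links correctly to the previous block's hash."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"data": b["data"], "prev": b["prev"]}):
            return False
        prev = b["hash"]
    return True
```

Changing any historical entry invalidates the whole chain from that point on, which is why a distributed ledger built this way has no need for a central clearing house to vouch for history.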

Analytics for Strategic Management

I am starting a new executive course, Analytics for Strategic Management, with my young and very talented colleagues Alessandra Luzzi and Chandler Johnson (both with the Center for Digitization at BI Norwegian Business School).

alessandra

Alessandra Luzzi

chandler

Chandler Johnson

The course (over five modules) is aimed at managers who want to become sophisticated consumers of analytics (be it Big Data or the more regular kind). The idea is to learn just enough analytics that you know what to ask for, where the pressure points are (so you do not ask for things that cannot be done or will be prohibitively expensive). The participants will learn from cases, discussions, live examples and assignments.

Central to the course is a course analytics project, where the participants will seek out data from their own company (or, since it will be group work, someone else’s), figure out what they can do with the data, and end up, if not with a finished analysis (that might happen), at least with a well-developed project specification.

The course will contain quite a bit of analytics – including a spot of Python and R programming – again, so that the executives taking it will know what they are asking for and what is being done.

We were a bit nervous about offering this course – a technically oriented course with a February start date. The response, however, has been excellent, with more than 20 students signed up already. In fact, we will probably be capping the course at 30 participants: since we are teaching it for the first time, and will undoubtedly change many things as we go along, 30 is more than enough.

If you can’t do the course this year – here are a few starting pointers to whet your appetite:

  • Big Data is difficult to define. This is always the case with fashionable monikers – for instance, how big is “big”? – but good ol’ Wikipedia comes to the rescue, with an excellent introductory article on the concept. For me, Big Data has always been about having the entire data set instead of a sample (i.e., n = all), but I can certainly see the other dimensions of delineation suggested here.
  • Data analytics can be very profitable (PDF), but few companies manage to really mine their data for insights and actions. That’s great – more upside for those who really wants to do it!
  • Data may be big but often is bad, causing data scientists to spend most of their time fixing errors, cleaning things up and, in general, preparing for analytics rather than the analysis itself. Sometimes you can almost smell that the data is bad – I recommend The Quartz guide to bad data as a great list of indicators that something is amiss.
  • Data scientists are few, far between and expensive. There is a severe shortage of people with data analysis skills in Norway and elsewhere, and the educational systems (yours truly excepted, of course) are not responding. Good analysts are expensive. Cheap analysts – well, you get what you pay for: quite possibly some analytics you may like, but not the analytics you ought to get.
  • There is lots of data, but a shortage of models. Though you may have the data and the data scientists, that does not mean that you have good models. It is actually a problem that as soon as you have numbers – even bad ones – they become a focal point for decision makers, who show a marked reluctance to ask where the data comes from, what it actually means, and how the models were constructed.
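As a small taste of the data-cleaning drudgery mentioned above, here is a minimal, stdlib-only screen for a couple of the “smells” the Quartz guide lists – sentinel values and duplicate rows. The sentinel list and the record format are my own illustrative choices, not taken from the guide.

```python
# Values that often mean "missing" in disguise (illustrative list).
SENTINELS = {"-9999", "9999", "65535", "N/A", "NULL", ""}

def screen(rows):
    """Count bad-data indicators in a list of records (dicts of strings):
    suspicious sentinel values and exact duplicate rows."""
    report = {"sentinel_values": 0, "duplicate_rows": 0}
    seen = set()
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicate_rows"] += 1
        seen.add(key)
        report["sentinel_values"] += sum(
            1 for v in row.values() if v.strip().upper() in SENTINELS)
    return report
```

Running a screen like this before any modelling is cheap insurance: if the counts are high, the analysis that follows will mostly be cleaning, and the project plan should say so.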

And with that – if you are a participant, I look forward to seeing you in February. If you are not – well, you had better boogie over to BI’s web pages and sign up.

Norway and self-driving cars

(This is a translation (with inevitable slight edits) from Norwegian of an op-ed Carl Störmer (who, in all fairness, had the idea) and I had in the Norwegian business newspaper Dagens Næringsliv.)

A self-driving future

Espen Andersen, BI Norwegian Business School and Carl Störmer, Jazzcode AS

Norway should become the world’s premier test laboratory for self-driving cars.

Norway needs to find new areas of development after oil – and we should go for something the whole world wants, where we have local advantages, and where we will develop deep and important knowledge even if the original idea does not succeed. We suggest that Norway should become the world’s premier test laboratory for self-driving cars – a “moon landing” we can develop far further than what we have been able to do from our expertise in sub-sea petroleum extraction.

Self-driving cars will do for personal transportation what e-mail has done for snail mail. Tesla founder Elon Musk says Teslas will drive themselves in two years – they already can change lanes and park themselves in your garage. The “summon” function (a “come here” command for your car) could, in principle, work across the entire USA.

An electric self-driving vehicle will seldom park, will choose the fastest or most economical route, will always obey the traffic laws, and will emit no pollutants. A society with self-driving cars can reduce the number of cars by 70-90%, free up about 30% more space in large cities, reduce traffic accidents by 90%, and drastically reduce local air pollution.

Google’s self-driving cars have driven several million kilometers without self-caused accidents, but there are still many technical problems left to solve. The cars work well on the well-marked and carefully mapped roads of sunny California. The self-driving cars drive well, but the human drivers do not. And we cannot execute a sudden transition – for a long time, human and automated drivers will have to coexist.

Norway has unique advantages as a lab. In Norway, we can develop our own self-driving cars, but also be the first nation to really start using them. We do not have our own car industry to protect, we are quick to purchase and start to use new technologies, we are such a small country that decision paths are short, and should an international company make a marketing blunder in Norway, the damage will be limited to a very small market. We can easily change our laws to allow for testing of self-driving cars: Oslo, Bergen, Trondheim and Stavanger have enough traffic issues and large enough populations for a serious experiment. As a nation, we are focused on environmental issues, innovation and employment.

Norway’s bad road standard is an advantage. Norway has plenty of snow and ice, bad weather and bad roads. Today’s self-driving cars need clear road markings to be able to drive safely. But Norway has world-leading capabilities in communication and coordination technology: The oil industry has learned how to continuously position ships in rough seas with an accuracy of about five centimeters. Telenor is a world-leading company in building robust mobile phone networks in complicated terrain. Technology developed for Norwegian conditions will work anywhere in the world.

Norway needs self-driving cars more than most nations. Norway is the world’s richest and most equal country, having created a modern welfare state through automation and technology-based productivity improvements. The transportation industry is over-ripe for automation. The technology can maintain productivity growth and offer a new life to many people – the blind, the old and the physically handicapped – who do not have access to cheap and simple transportation today. And many new jobs – think before and after the smartphone here – can be created based on abundant and cheap transportation.

Norway will win even if we don’t succeed. Lots of new technology has to be developed to take self-driving cars from experiment to production: For instance, software that can handle the extremely complicated situations that arise when autonomous cars have to share the road with tired human drivers. More importantly, lots of products and services can be built on top of self-driving cars, business models have to be developed, and many industries will be impacted. The insurance business, for instance, will have to adapt to a market with very few accidents. Even the donor organ market will be impacted – though traffic accidents by no means supply the majority of available organs, fewer accidents might create a shortage of donor organs.

Norway has faced tremendous changes before. We have transitioned from harvested ice to electric refrigeration (in the process enabling our large fishing and fish farming industries), from sail to steam shipping, from fixed-line telephony to mobile phones. Our politicians have, quite wisely, created an electric car policy ensuring that we have the highest density of electric cars in the world (10% of all Teslas are sold in Norway). Norway has everything to gain and very little to lose by going all in for self-driving cars.

Let’s do it!

Does someone have to die first?

Blogpost for ACM Ubiquity, intro here:

Digital technology changes fast, and organizations change slowly: First we use the technology as an automated, digitized version of the old way of doing things; only gradually do we understand that in order to achieve productivity and functional breakthroughs, we need to leave the old metaphors behind. For this to happen, we need new mindsets, unfettered by the old way of using the technology. I wonder if my generation has the capability to do it.

Read the rest at ACM Ubiquity: Does someone have to die first?

Computational thinking notes

Notes from Grady Booch’s presentation on Computational Thinking, an ACM Webinar, February 3, 2016 (4,617 people attended, in case you wondered).

Note: This is real-time notetaking/thought-jotting. Lots of errors and misrepresentations. Deal with it.

This will be a different way of thinking – and perhaps a way to think differently about the profession of software development. Recommends Yuval Harari’s Sapiens; talks of the cognitive revolution, the agricultural revolution, the scientific revolution. Babbage as citizen scientist, beginning to see a new way of thinking: computational thought. Boole had a similar set of ideas, took it from mechanization to laws of thought – trying to investigate the operations of the mind by which reasoning is done.

I can’t shoe a horse, but I can build a 3D rendering of one, and then produce a virtual horse in Avatar. Why? Our ways of thinking address what is necessary to survive in the world we live in. We have a different relationship to time: With the cognitive revolution, we had slow ways of measuring time, such as seasons; the scientific revolution gave us theories of time – and a frantic obsession with ever smaller measures of time. If the ways of thinking we had in previous lives were appropriate then, what are the ways we should think now?

Jeannette Wing – introduced computational thinking in CACM: Computational thinking as the thought processes that are involved in formulating a problem and expressing a solution in a way that a computer – human or machine – can carry it out. Being able to do that will be increasingly important to succeed in today’s world – it will help you shape the world and live in it.

Computing started out as human computers (mostly women), then a gradual mechanization and, indeed, industrialization of computing with ever more rigid processes, and eventually a digitalization of it (via punch cards). Businesses gradually started to reshape themselves as a result of computational thinking – and businesses changed computation. The sciences began to use computational thinking. Around WWII it also began to change the ways we went to and won wars. (Again, many women, see the documentary “Top Secret Rosies“.) Computational thinking drove our imagination beyond what the computers could do, beyond what we do in the present.

In the 60s and 70s, computational thinking started to reshape society – but it was compartmentalized in the “programming priesthood.” SAGE was one of the first interactive computer systems, an example of interfaces learning from war: the largest system of its kind, it forced us forward in UI, hardware and software. The 360 and others broke computational thinking out of the hands of the chosen few – Margaret Hamilton coined “software engineering”. It finally became personal with the PC – representing a state change, introducing devices that forced people to think in computational ways, forcing us to adapt to the machine. Current state: Outsourcing part of our brains to smartphones – computers that happen to have an app for dialing – computation going from numerical to symbolic to imagined realities. Computational thinking is beginning to erode our thinking about old imagined realities, such as governments and organizations.

I think the idea of the singularity is fundamentally stupid – when and if it comes, we will have become computers ourselves anyway, according to Rodney Brooks. This forces us to think about what it is to be human. How does computational thinking change how we look at the world?

In terms of software development, the change has been from mathematical to symbolic to imagined realities. We are not only building imagined realities, but stepping inside them and living in them.

The fundamental premise of science is that the cosmos is understandable; the fundamental premise of our domain is that the cosmos is computable. We enter the world with the understanding that anything we can dream, we can compute.

Gödel taught us that there are things that are unknowable, but that does not diminish the importance of scientific thinking. Similarly, there are things that are uncomputable, but computational thinking is still powerful and can push the world forward. The scientific process suggests a trajectory towards a simplified, standard model. In computation, we go the other way: Start with something simple and make it incredibly complex.

What does it mean to see the world as computable? The first assumption is that the cosmos is discrete, or at least computationally finite. I can make reasonable assumptions about reality that mean I can do powerful things. The assumption may not be totally true, but it is near enough to be useful.

Secondly, I assume the world is based on information, which means I can look at the world through data. DNA and cellular mechanisms can be computed. The lens of information allows us to derive powerful theories. The dark side is what is happening with CRISPR: genetic manipulation without knowing the consequences. Incredible power, but also incredible responsibility.
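A toy illustration of this “world as information” lens (my addition, not from the talk): once you treat a DNA strand as a string, cellular mechanics like base pairing become string operations.

```python
# Watson-Crick base pairing, expressed as a lookup table.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement of a DNA strand,
    the sequence of the opposite strand read 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # -> ACGCAT
```

The point is not the biology but the move itself: reality is abstracted into data, and from then on it is computable.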

Third, data is an abstraction of reality. We can use all these powerful tools, but in the end we are building an abstraction of the world. We can build these abstractions and begin to rely upon them, but the other side of computational thinking realizes that this is not reality, it is just our view of it. A model is a model.

We use algorithms to form abstractions, but can now hand them over to the machine without waiting, because we can depend on our ability to generate an algorithm that represents the world. Look at BabyX, from the University of Auckland.

The importance of scale, from Feynman‘s “There’s Plenty of Room at the Bottom” lecture. But we can also build imaginary realities that are larger than the universe itself. Computing is universal – it can be used everywhere and spreads to any manifestation of execution: computational physics, chemistry, biology, psychology, sociology … and gradually computational philosophy. It has spread itself in ways that have changed everything – but maybe this way of thinking is just the threshold of the next way of thinking?

The earliest ways of thinking evolved as a means of bringing more certainty and predictability to an uncertain and unpredictable world. Scientific thinking evolved to understand the world. Computational thinking has evolved as a means of controlling the world at a level of fidelity once reserved for gods.

Computational thinking has changed how we look at the world. That is to be celebrated, and we should encourage non-programmers to understand how it works. But let’s not forget what it means to be human in this world.

Some questions:

  • Are we falling into the “modelling the world in terms of current technology” trap? Yes, let us be self-aware of the limits of this thinking. We are assuming that evolution is computation on DNA, but that is only an abstraction – what if it is not the correct model? BTW: Nick Bostrom and intelligence – I disagree that computation can create life, but let’s explore it.
  • How do new forms of teamwork (as with email) change our ability to solve problems? Not a sociologist, but fascinating that the same social structures show up in our imagined worlds. 10K years out? Don’t know, but some adaptation may have happened. No matter what, we need trust – the degree of trust forms the basis for any organization and what you can do with it. I believe that anything we do in this space is shaped by human need.
  • What about genetic programming – will computers be capable of computational thinking? First off – computers write their own programs now, including manipulating their environment. But most of the work in neural networks deals with the perception side of the world; we can’t go meta on those neural networks. Second – is the mind computable? Yes, I believe it is, but see one of the computing documentaries we are making.
  • Can computing create art with meaning? Listen to the classical compositions of Emily Howell – but Emily is an algorithm. Computers can create art, but we create our own meaning.
  • Does outsourcing your brain to smartphones inhibit our ability to do computational thinking? See Sherry Turkle: it does change our brain, refactors it. It is a dance between us and our devices, and that will continue for a long time.

The recording will be at learning.acm.org/webinar.