Category Archives: The thoughtful manager

On videoconferencing and security

Picture: Zoom

Yesterday began with a message from a business executive who was concerned about the security of Zoom, the video conferencing platform that many companies (and universities) have landed on. The reason was a newspaper article regurgitating several internet articles, partly about functionality that Zoom has adequately documented, partly about security holes that were fixed a long time ago.

So is there any reason to be concerned about Zoom or Whereby or Teams or Hangouts or all the other platforms?

My answer is “probably not” – at least not for the security holes discussed here, and not for ordinary users (and that includes most small to medium-sized companies I know about).

It is true that video conferencing introduces some security and privacy issues, but if we look at it realistically, the biggest problem is not the technology but the people using it (something we nerds refer to as PEBKAC – Problem Exists Between Keyboard And Chair).

When a naked man sneaks into an elementary school class via Whereby, as happened a few days ago here in Norway, it is not due to technology problems, but because the teacher had left the door wide open, i.e., had not turned on the function that makes it necessary to “knock” and ask for permission to enter.

When anyone can record (and have the dialogue automatically transcribed) from Zoom, it is because the host has not turned off the recording feature. By the way, anyone can record a video conference with screen capture software (such as Camtasia), a sound recorder or for that matter a cell phone, and no (realistic) security system in the world can do anything about it.

When the boss can monitor that people are not using other software while sitting in a meeting (a feature that can be completely legitimate in a classroom – it is equivalent to the teacher looking out over the class to see if the students are awake), well, I don’t think the system is to blame for that either. Any leader who holds meetings so irrelevant that people do not bother to pay attention should rethink their communications strategy. Any executive I know would have neither the time nor the interest to activate this feature – because if you need technology to force people to wake up, you don’t have a problem technology can solve.

The risk of a new tool should not be measured against some perfect solution, but against the alternative if you don’t have it. Right now, video conferencing is the easiest and best tool for many – so that is why we use it. But we have to take the trouble to learn how it works. The best security system in the world is helpless against people writing their password on a Post-It, visible when they are in a videoconference.

So – before using the tool – take a tour of the settings page, choose carefully which features you want to use, and think through what you want to achieve by having the meeting.

If that’s hard, maybe you should cancel the whole thing and send an email instead.

Getting dialogue online

Back in the nineties, I facilitated a meeting with Frank Elter in a Telenor video meeting room in Oslo. There were about 8 participants and an invited presenter: Tom Malone from MIT.

The way it was set up, we first saw a one-hour video Tom had created, where he gave a talk and showed some videos about new ways of organizing work (one of the more memorable sequences was a shortened version of the four-hour house video). After seeing Tom’s video, we spent about one hour discussing some of the questions Tom had raised in it. Then Tom came on from a video conferencing studio in Cambridge, Massachusetts, to discuss with the participants.

The interesting thing, to me, was that the participants experienced this meeting as “three hours with Tom Malone”. Tom experienced it as a one-hour discussion with very interested and extremely well-prepared participants.

A win-win, in other words.

I was trying for something similar yesterday, guest lecturing in Lene Pettersen’s course at the University of Oslo, using Zoom with early entry, chat, polling and all video/audio enabled for all participants. This was the first videoconference lecture for the students and for three of my colleagues, who joined in. In preparation, the students had read some book chapters and articles and watched my video on technology evolution and disruptive innovations.

For the two hour session, I had set up this driving plan (starting at 2 pm, or 14:00 as we say over here in Europe…):


Leading the discussion. Zoom allows you to show a virtual background, so I chose a picture of the office I would have liked to have…

14:00 – 14:15 Checking in, fiddling with the equipment and making sure everything worked. (First time for many of the users, so have them show up early so technical issues don’t eat into the teaching time.)
14:15 – 14:25 Lene introduces the class, talks about the rest of the course and turns over to Espen (we also encouraged the students to enter questions they wanted addressed in the chat during this piece)
14:25 – 14:35 Espen talking about disruption and technology-driven strategies.
14:35 – 14:55 Students into breakout rooms, discussing what it would take for video and digital delivery to be a disruptive innovation for universities. (Breaking the students up into 8 rooms of four participants, asking them to nominate a spokesperson to take notes and paste them into the chat when they return, and to discuss the specific question: What needs to happen for COVID-19 to cause a disruption of universities, and how would such a disruption play out?)
14:55 – 15:15 Return to main room, Espen sums up a little bit and calls on a spokesperson from three of the eight groups, based on the notes posted in the chat (which everyone can see). Espen talks about the Finn.no case and raises the next discussion question.
15:15 – 15:35 Breakout rooms, students discuss the next question: What needs to happen for DNB (Norway’s largest bank) to become a data-driven, experiment-oriented organization like Finn.no? What are the most important obstacles and how should they be dealt with?
15:35 – 15:55 Espen sums up the discussion, calling on some students based on the posts in the chat.
15:55 – 16:00 Espen hands back to Lene, who sums up. After 16:00, we stayed on with colleagues and some of the students to discuss the experience.


The dashboard as I saw it. Student names obscured.

Some reflections (some of these are rather technical, but they are notes to myself):

  • Not using Powerpoint or a shared screen is important. Running Zoom in Gallery view (I had set it up so you could see up to 49 participants at the same time) and having the students log in to Zoom and upload a picture gave a feeling of community. Screen and/or presentation sharing breaks the flow for everyone – when you do it in Zoom, the screen reconfigures (as it does when you come back from a breakout room) and you have to reestablish the participant panel and the chat floater. Instead, using polls and discussion questions, with results communicated through the chat, was easier for everyone (and way less complicated).
  • I used polls on three occasions: before each discussion breakout, and at the end to ask the students what the experience was like. Satisfactory results, I would say – they were very happy about it and had good pointers on how to make it better.

  • We had no performance issues and rock-steady connection the whole way through.
  • It should be noted that the program is one of the most selective in Norway and the students are highly motivated and very good. During the breakout sessions I jumped into each room to listen in on the discussion (and learned that it was best to pause recording, to avoid a voice saying “This session is being recorded” as I entered). The students were actively discussing in every group, with my colleagues (Bendik, Lene, and Katja) also participating. I had kept the groups to four participants, based on feedback from a session last week, where the groups had been 6–7 students and had issues with people speaking over each other.
  • Having a carefully written driving plan was important, but it was still a very intense experience, and I was quite exhausted afterwards. My advice on not teaching alone stands – in this case, I was the only one with experience, but that will change very fast. I kept feeling rushed and would have liked more time, especially in the summary sections, where I would have liked to bring more students in to talk.
  • I did have a few breaks myself – during the breakout sessions – to go to the bathroom and replenish my coffee – but failed to allow for breaks for the students. I assume they managed to sneak out when necessary (hiding behind a still picture), but next time, I will explicitly have breaks, perhaps suggest a five minute break in the transition from main room to breakout rooms.

Conclusion: This can work very well, but I think it is important to set up each video session based on what you want to use it for: to present something, to run an exercise, to facilitate interaction. With a small student group like this, I think interaction worked very well, but it requires a lot of preparation. You have to be extremely conscious of time – I seriously think that any two-hour classroom session needs to be rescheduled as a three-hour session, simply because the interaction is slower and you need to have breaks.

As Winston Churchill almost said (he said a lot, didn’t he): We make our tools, and then our tools make us. We now have the tools, it will be interesting to see how the second part of this transition plays out.

Summer reading for the diligent digital technology student

Eivind Grønlund, one of my students at the Informatics: Digital Business and Leadership program at the University of Oslo, sent me an email asking what to read during the summer to prepare for the fall.

Well, I don’t believe in reading textbooks in the summer, I believe in reading things that will excite you and make you think about what you are doing and slightly derail you in a way that will make you a more interesting person when Fall comes. In other words, read whatever you want.

That being said, the students at DigØk have two business courses next year – one on organization and leadership, one on technology evolution and strategy. Both will have a focus on basics, with a flavor of high tech and the software business. What can you read to understand that, without having to dig into textbooks or books that may be on the syllabus, like Leading Digital, The Innovator’s Solution, Enterprise Architecture as Strategy, or Information Rules?

Here are four books that are entertaining and wise and will give you an understanding of how humans and technology interact and at least some of the difficulties you will run into trying to manage them – but in a non-schoolbook context. Just the thing for the beach, the mountain-top, the sailboat.

  • Neal Stephenson: Cryptonomicon. The ultimate nerd novel. A technology management friend of mine re-reads this book every summer. It involves history, magical realism (the character of Enoch Root), humor, startup lore, encryption and, well, fun. Several stories in one: about a group of nerds (main protagonist: Randy Waterhouse) doing a startup in Manila and other places in 1999, his grandfather, Lawrence P. Waterhouse, running cryptographic warfare against the Germans and Japanese during WWII, and how the stories gradually intersect and come together towards the end. The gallery of characters is hilarious and fascinating, and you can really learn something about startups, nerd culture, programming, cryptography and history along the way. Highly recommended.
  • Tracy Kidder: The Soul of a New Machine. This 1981 book describes the development process of a Data General minicomputer as a deep case study of the people in it. It could just as well have been written about any really advanced technology project today – the characters, the challenges, the little subcultures that develop within a highly focused team stretching the boundaries of what is possible. One of the best case studies ever written. If you want to understand how advanced technology gets made, this is it.
  • Douglas Hofstadter: Gödel, Escher, Bach. This book (aficionados just call it GEB) was recommended to me by one of my professors in 1983, and is responsible for me wanting to be in academia and have time and occasion to read books such as this one. It is also one of the reasons I think The Matrix is a really crap movie – Hofstadter said it all before, and I figured out the plot almost at once and thought the whole thing a tiresome copycat. Hofstadter writes about patterns, abstractions and the concept of meta-phenomena, but mostly the book is about self-referencing systems. As with any good book that makes you think, it is breathtaking in what it covers, pulling together music, art, philosophy, computer science (including a bit on encryption, always a favorite) and history. Not for the faint-hearted, but as Erling Iversen, my old boss and an extremely well-read man, said: You can divide techies into two kinds: those who have read Hofstadter, and those who haven’t.
  • Tim O’Reilly: WTF? What’s the Future and Why It’s Up to Us. Tim is the founder of O’Reilly and Associates (the premier source of hands-on tech books for me) and has been a ringsider and a participant in anything Internet and digital tech since the nineties. This fairly recent book provides a good overview of the major evolutions and battles during the last 10-15 years and is a great catcher-upper for the young person who has not been part of the revolution (so far).

And with that – have a great summer!

The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – the Code of Hammurabi on buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer, in others licensing is required, perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined “digital” around 1942, Tukey coined “software” in 1952. The 1968–69 conferences on software engineering popularized the term, and a CACM letter by Anthony Oettinger used it in 1966, but the term was in use before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra and Knuth saw it variously as a practical art, an art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with the forces that are apparent when designing and building software systems. Good engineering is based on discovery, invention, and implementation – and this has been the pattern of software engineering: a dance between science and implementation.

Lovelace first programmer, algorithmic development. Boole and boolean algebra, implementing raw logic as “laws of thought”.

The first computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanization with punch cards (Gertrude Blanch, human computing, 1938; J. Presper Eckert on punched-card methods, 1940 – the first methodology with pattern languages).

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high-order languages, the first general-purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn on real-time (1948), Wheeler and Gill on subroutines (1949), Eckert and Mauchly with software as a thing in itself (1949). John Backus with imperative programming (Fortran, 1946), Goldstine and von Neumann on flowcharts (1947). Commercial computers – LEO, for a tea company in England, with John Pinkerton creating the operating system; Hopper with ALGOL and COBOL; reuse (Bemer, Sammet). The SAGE system was important, command and control – Jay Forrester and Whirlwind 1951, Bob Evans (SAGE, 1957), Strachey on time sharing 1959, Dina St Johnston with the first programming services company (1959).

The software crisis – not enough programmers around, machines more expensive than the humans, a priesthood of programming: carry your programs over and get the results back, batch. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problems were complexity and productivity, hence software engineering (Margaret Hamilton) arguing that the process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. The first software engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – a second generation of software engineering: the Booch method (1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), and James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and the Internet changed things again: Sutherland and Scrum, Beck on eXtreme programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (the first major contract, between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organizational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time, trusted, and there we are. Computer science focused on physics and algorithms; software engineering on process, architecture, economics, organization, HCI. SWEBOK first in 2004, latest in 2014 – codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most software development is like engineering a city: you have to change things in the presence of things that you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as DeepMind. There is still a lot we don’t know how to do – such as architecture for AI, and there is little rigorous specification and testing. Orchestration of AI will change how we look at systems: teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.

Sources: https://twitter.com/Grady_Booch and https://computingthehumanexperience.com/

Brilliance squared

Stephen Fry and Steven Pinker are two of the people I admire the most, for their erudition, their extreme level and variety of learning, and their willingness to discuss their ideas. Having them both on stage at the same time, one interviewing the other (on the subject of Pinker’s latest book, Enlightenment Now), is almost too much, but here they are:

(I did, for some reason, receive an invitation to this event, and would have gone despite the timing and expense if at all possible, but it was oversubscribed before I could click the link. So thank whomever for YouTube, I say. It can be used to spread enlightenment, too.)

Interesting interview with Rodney Brooks

Boingboing, which is a fantastic source of interesting stuff to do during Easter vacation, has a long and fascinating interview by Rob Reid with Rodney Brooks, AI and robotics researcher and entrepreneur extraordinaire. Among the things I learned:

  • What the Baxter robot really does well – interacting with humans and not requiring 1/10 mm precision, especially when learning
  • There are not enough workers in manufacturing (even in China); most of the people working there spend their time waiting for some expensive piece of capital equipment to finish
  • The automation infrastructure is really old, still using PLCs that refresh and develop really slowly
  • Robots will be important in health care – preserving people’s dignity by allowing them to drive and stay at home longer, with robots that understand force and softness and can do things such as help people out of bed.
  • He has written an excellent 2018 list of dated predictions on the evolution of robotic and AI technologies, highly readable, especially his discussions on how to predict technologies and that we tend to forget the starting points. (And I will add his blog to my Newsblur list.)
  • He certainly doesn’t think much of the trolley problem, but has a great example to understand the issue of what AI can do, based on what Isaac Newton would think if he were transported to our time and given a smartphone – he would assume that it would be able to light a candle, for instance.

Worth a listen.

Neural networks – explained

As mentioned here a few times, I teach an executive course called Analytics for strategic management, as well as a short program (three days) called Decisions from Data: Driving an Organization on Analytics. We have just finished the first version of both of these courses, and it has been a very enjoyable experience. The students (in both courses) have been interested and keen to learn, bringing relevant and interesting problems to the table, and we have managed to do what it said on the tin (I think) – make them better consumers of analytics, capable of having a conversation with the analytics team, employing the right vocabulary and being able to ask more intelligent questions.

Of course, programs of this type do not allow you to dive deep into how things work, though we have been able to demonstrate MySQL, Python and DataRobot, and also give the students an understanding of how rapidly these things are evolving. We have talked about deep learning, for instance, but not how it works.

But that is easy to fix – almost everything about machine learning is available on YouTube and in other web channels, once you know a little bit of the language. For instance, to understand how deep learning works, you can check out a series of videos from Grant Sanderson, who produces very good educational videos under the name 3Blue1Brown.

(There are follow-up videos: Chapter 2, Chapter 3, and a formal calculus appendix to Chapter 3. The same channel has a lot of other math-related videos, too, including a great explanation of how Bitcoin works, which I’ll have to get into at some point, since I keep being asked why I don’t invest in Bitcoin all the time.)

Of course, you have to be rather interested to dive into this, and it certainly is not required reading for an executive who only wants to be able to talk intelligently to the analytics team. But it is important (and a bit reassuring) to note the mechanisms employed: breaking a very complex problem up into smaller problems, breaking those up into even smaller problems, solving the small problems by programming, and then stepping back up. For those of you with high school math: it really isn’t that complicated. Just complicated in layers.
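
To make the “layers” point concrete, here is a minimal sketch in Python (my own illustration, not taken from the videos or from the course material). Each layer is nothing more than a weighted sum followed by a squashing function, and a network is just such layers stacked on top of each other; the layer sizes and numbers below are made up.

import numpy as np

def sigmoid(x):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def layer(inputs, weights, biases):
    # One layer: a weighted sum of the inputs, plus a bias, then a squash.
    return sigmoid(weights @ inputs + biases)

# Illustrative only: a tiny "network" taking 3 inputs, with a hidden layer
# of 4 neurons and 2 outputs. Real networks just have more and bigger layers.
rng = np.random.default_rng(seed=42)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = np.array([0.2, 0.5, 0.9])   # some input
hidden = layer(x, W1, b1)       # one small problem solved...
output = layer(hidden, W2, b2)  # ...then the same step again, one level up
print(output)

Training, which the videos explain, is “just” the process of nudging those weights and biases until the outputs match known examples: basic arithmetic, repeated at enormous scale.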

And it is good to know that all this advanced AI stuff really is rather basic math. Just applied in an increasingly complex way, really fast.