Summer reading for the diligent digital technology student

Eivind Grønlund, one of my students at the Informatics: Digital Business and Leadership program at the University of Oslo, sent me an email asking what to read during the summer to prepare for the fall.

Well, I don’t believe in reading textbooks in the summer, I believe in reading things that will excite you and make you think about what you are doing and slightly derail you in a way that will make you a more interesting person when Fall comes. In other words, read whatever you want.

That being said, the students at DigØk have two business courses next year – one on organization and leadership, one on technology evolution and strategy. Both will have a focus on basics, with a flavor of high tech and the software business. What can you read to understand that, without having to dig into textbooks or books that may be on the syllabus, like Leading Digital, The Innovator’s Solution, Enterprise Architecture as Strategy, or Information Rules?

Here are four books that are entertaining and wise and will give you an understanding of how humans and technology interact and at least some of the difficulties you will run into trying to manage them – but in a non-schoolbook context. Just the thing for the beach, the mountain-top, the sailboat.

  • Neal Stephenson: Cryptonomicon. The ultimate nerd novel. A technology management friend of mine re-reads this book every summer. It involves history, magical realism (the character of Enoch Root), humor, startup lore, encryption and, well, fun. Several stories in one: about a group of nerds (main protagonist: Randy Waterhouse) doing a startup in Manila and other places in 1999, his grandfather, Lawrence Pritchard Waterhouse, running cryptographic warfare against the Germans and Japanese during WWII, and how the stories gradually intersect and come together towards the end. The gallery of characters is hilarious and fascinating, and you can really learn something about startups, nerd culture, programming, cryptography and history along the way. Highly recommended.
  • Tracy Kidder: The Soul of a New Machine. This 1981 book describes the development process of a Data General minicomputer as a deep case study of the people in it. It could just as well have been written about any really advanced technology project today – the characters, the challenges, the little subcultures that develop within a highly focused team stretching the boundaries for what is possible. One of the best case studies ever written. If you want to understand how advanced technology gets made, this is it.
  • Douglas Hofstadter: Gödel, Escher, Bach. This book (aficionados just call it GEB) was recommended to me by one of my professors in 1983, and is responsible for me wanting to be in academia and have time and occasion to read books such as this one. It is also one of the reasons I think The Matrix is a really crap movie – Hofstadter said it all before, and I figured out the plot almost at once and thought the whole thing a tiresome copycat. Hofstadter writes about patterns, abstractions and the concept of meta-phenomena, but mostly the book is about self-referencing systems; as with any good book that makes you think, it is breath-taking in what it covers, pulling together music, art, philosophy and computer science (including a bit on encryption, always a favorite) and history. Not for the faint-hearted, but as Erling Iversen, my old boss and an extremely well-read man, said: You can divide techies into two kinds: those who have read Hofstadter, and those who haven’t.
  • Tim O’Reilly: WTF? What’s the Future and Why It’s Up to Us. Tim is the founder of O’Reilly and Associates (the premier source of hands-on tech books for me) and has been a ringsider and a participant in anything Internet and digital tech since the nineties. This fairly recent book provides a good overview of the major evolutions and battles of the last 10-15 years and is a great catcher-upper for the young person who has not been part of the revolution (so far).

And with that – have a great summer!


The history of software engineering


The History of Software Engineering
an ACM webinar presentation by
ACM Fellow Grady Booch, Chief Scientist for Software Engineering, IBM Software
(PDF slides here.)

Note: These are notes taken while listening to this webinar. Errors, misunderstandings and misses aplenty…

(This is one of the perks of being a member of ACM – listening to legends of the industry talking about how it got started…)

Trust is fundamental – and we trust engineering because of licensing and certification. This is not true of software systems – and that leads us to software engineering. Checks and balances are important – the Code of Hammurabi on buildings, for instance. The first licensed engineer was Charles Bellamy, in Wyoming, in 1907, largely because of earlier failures of bridges, boilers, dams, etc.

Systems engineering dates back to Bell Labs, early 1940s, during WWII. In some states you can declare yourself a software engineer, in others licensing is required – perhaps because the industry is young. Computers were, in the beginning, human (mostly women). Stibitz coined digital around 1942, Tukey coined software in 1952. The 1968-69 conferences on software engineering popularized the term, but a CACM letter by Anthony Oettinger had used it in 1966, and it was in use even before that (“systems software engineering”), most probably originated by Margaret Hamilton in 1963, working for Draper Labs.

Programming – art or science? Hopper, Dijkstra and Knuth variously saw it as practical art, art, etc. Parnas distinguished between computer science and software engineering. Booch sees it as dealing with the forces that become apparent when designing and building software systems. Good engineering is based on discovery, invention, and implementation – and this has been the pattern of software engineering: a dance between science and implementation.

Lovelace first programmer, algorithmic development. Boole and boolean algebra, implementing raw logic as “laws of thought”.

First computers were low-cost assistants to astronomers, establishing rigorous processes for acting on data (Annie Cannon, Henrietta Leavitt). Scaling of problems and automation towards the end of the 1800s – rows of (human) computers in a pipeline architecture. The Gilbreths created process charts (1921). Edith Clarke (1921) wrote about the process of programming. Mechanisation with punch cards (Gertrude Blanch, human computing, 1938; J. Presper Eckert on punch card methods, 1940), first methodology with pattern languages.

Digital methods coming – Stibitz, von Neumann, Aiken, Goldstine, Grace Hopper with machine-independent programming in 1952, devising languages and independent algorithms. Colossus and Turing, Tommy Flowers on programmable computation, Dorothy Du Boisson with workflow (primary operator of Colossus), Konrad Zuse on high order languages, first general-purpose stored-program computer. ENIAC with plugboard programming, dominated by women (Antonelli, Snyder, Spence, Teitelbaum, Wescoff). Towards the end of the war: Kilburn real-time (1948), Wheeler and Gill subroutines (1949), Eckert and Mauchly with software as a thing of itself (1949). John Backus with imperative programming (Fortran, 1946), Goldstine and von Neumann flowcharts (1947). Commercial computers – LEO for a tea company in England, John Pinkerton creating its operating system; Hopper with ALGOL and COBOL, reuse (Bemer, Sammet). SAGE system important, command and control – Jay Forrester and Whirlwind 1951, Bob Evans (SAGE, 1957), Strachey time sharing 1959, St Johnson with the first programming services company (1959).

Software crisis – not enough programmers around, machines more expensive than the humans, priesthood of programming, carry programs over and get results, batch. Fred Brooks on project management (1964), Constantine on modular programming (1968), Dijkstra on structured programming (1969). Formal systems (Hoare and Floyd) and provable programs; object orientation (Dahl and Nygaard, 1967). The main programming problem was complexity and productivity, hence software engineering, with Margaret Hamilton arguing that the process should be managed.

Royce and the waterfall method (1970), Wirth on stepwise refinement, Parnas on information hiding, Liskov on abstract data types, Chen on entity-relationship modelling. First SW engineering methods: Ross, Constantine, Yourdon, Jackson, DeMarco. Fagan on software inspection, Backus on functional programming, Lamport on distributed computing. Microcomputers made computing cheap – second generation of SW engineering: the Booch method (1986), Rumbaugh, Jacobson on use cases, standardization on UML in 1997, open source. Mellor, Yourdon, Wirfs-Brock, Coad, Boehm, Basili, Cox, Mills, Humphrey (CMM), James Martin and John Zachman from the business side. Software engineering becomes a discipline with associations. Don Knuth (literate programming), Stallman on free software, Cooper on visual programming (Visual Basic).

Arpanet and the Internet changed things again: Sutherland and Scrum, Beck on eXtreme programming, Fowler and refactoring, Royce on the Rational Unified Process. Software architecture (Kruchten etc.), Reed Hastings (configuration management), Raymond on open source, Kaznik on outsourcing (first major contract between GE and India).

Mobile devices changed things again – Torvalds and git, Coplien and organisational patterns, Wing and computational thinking, Spolsky and Stack Overflow, Robert Martin and clean code (2008). Consolidation into the cloud: Shafer and Debois on devops (2008), context becoming important. Brad Cox and componentized structures, service-oriented architectures and APIs, Jeff Dean and platform computing, Jeff Bezos.

And here we are today: ambient computing, systems are everywhere and surround us. Software-intensive systems are used all the time and are trusted. Computer science focused on physics and algorithms, software engineering on process, architecture, economics, organisation, HCI. SWEBOK first in 2004, latest in 2014 – codification.

Mathematical -> Symbolic -> Personal -> Distributed & Connected -> Imagined Realities

Fundamentals -> Complexity -> HCI -> Scale -> Ethics and morals

Scale is important – risk and cost increase with size. Most SW development is like engineering a city: you have to change things in the presence of things that you cannot change. AI changes things again – symbolic approaches and connectionist approaches, such as DeepMind. There is still a lot we don’t know how to do – such as architecture for AI, and there is little rigorous specification and testing. Orchestration of AI will change how we look at systems – teaching systems rather than programming them.

Fundamentals always apply: Abstraction, separation, responsibilities, simplicity. Process is iterative, incremental, continuous releases. Future: Orchestrating, architecture, edge/cloud, scale in the presence of untrusted components, dealing with the general product.

“Software is the invisible writing that whispers the stories of possibility to our hardware…” Software engineering allows us to build systems that are trusted.

Sources: https://twitter.com/Grady_Booch and https://computingthehumanexperience.com/

Brilliance squared

Stephen Fry and Steven Pinker are two of the people I admire the most, for their erudition, extreme levels and variety of learning, and willingness to discuss their ideas. Having them both on stage at the same time, one interviewing the other (on the subject of Pinker’s latest book, Enlightenment Now), is almost too much, but here they are:

(I did, for some reason, receive an invitation to this event, and would have gone despite the timing and expense if at all possible, but it was oversubscribed before I could click the link. So thank whomever for Youtube, I say. It can be used to spread enlightenment, too.)

Interesting interview with Rodney Brooks

Boingboing, which is a fantastic source of interesting stuff to do during Easter vacation, has a long and fascinating interview by Rob Reid with Rodney Brooks, AI and robotics researcher and entrepreneur extraordinaire. Among the things I learned:

  • What the Baxter robot really does well – interacting with humans and not requiring 1/10 mm precision, especially when learning
  • There are not enough workers in manufacturing (even in China), and most of the ones working there spend their time waiting for some expensive capital equipment to finish
  • The automation infrastructure is really old, still using PLCs that refresh and develop really slowly
  • Robots will be important in health care – preserving people’s dignity by allowing them to drive and stay at home longer by having robots that understand force and softness and can do things such as help people out of bed.
  • He has written an excellent 2018 list of dated predictions on the evolution of robotic and AI technologies, highly readable, especially his discussions on how to predict technologies and that we tend to forget the starting points. (And I will add his blog to my Newsblur list.)
  • He certainly doesn’t think much of the trolley problem, but has a great example to understand the issue of what AI can do, based on what Isaac Newton would think if he were transported to our time and given a smartphone – he would assume that it would be able to light a candle, for instance.

Worth a listen.

Beyond Default

David Trafford and Peter Boggis are the kind of under-the-radar strategy consultants who ever so discreetly (and dare I say, in their inimitable British way) travel the world, advising enormous companies most civilians have never heard of on issues such as how to organise your internal departments so that they are capable of responding to technical change. (I should know, because I worked with them, first in CSC and then in the Concours Group, between 1994 and 2009.)

Now David and Peter have begotten a book, Beyond Default, that provides a perspective on strategy and organisational change built less on fashionable frameworks than on solid experience. Their focus is on how organisations fail to see changes in their environment and to develop strategies – real strategies – to adapt to them. The reasons are many, but the most important is that organisations have developed processes and measures to do what they currently do, and the focus on those particulars does not permit stepping back and seeing the bigger picture. Instead, companies carry on towards a “default” future – and, crucially, that future may be one of decline. Companies need to know what they don’t know and what they do not have the capabilities to do – and to acquire those capabilities when necessary. To do that, the authors advocate experiential learning – seeing for yourself what the future looks like by seeking it out, preferably as a group of managers from the same organisation experiencing and reflecting together.

The authors have a background as IT consultants, and it shows: they very much think of organisations as designed systems, with operating practices and (ideally) articulated operating principles. While eminently logical, this way of organising is hard to do – among other things, it requires thinking about organisations as tools for a purpose, and that purpose has to be articulated in a way that gives direction to their members. Thinking about your principles can make you articulate purpose, but it is very hard not to make the whole process a bit self-referential. Perhaps the key, as with the second law of thermodynamics, is to keep adding external energy, constantly identifying and understanding the ramifications of technical and other change – a process that requires energy, if nothing else.

Both authors care about language and about explaining and discussing what happens in a way that can be understood by the organisations they are trying to help. This means that they primarily use examples and stories, rather than frameworks (beyond simple illustrations), to convey their points. They end each chapter with a set of questions the reader can ask him- or herself about the organisations they manage – and do not, in any way, try to offer simple solutions. As such, the book works best when it talks about how to explain strategic necessities and start on a strategic journey – through collective leadership, not “great man” charisma. It works less well when trying to explain strategic analysis, perhaps because the authors have too much experience to settle on a simple, all-encompassing method.

Well worth the read, not least for the senior executive trying to understand a new world and wanting an explanation held in a language that fosters understanding rather than just excitement.

Neural networks – explained

As mentioned here a few times, I teach an executive course called Analytics for Strategic Management, as well as a short program (three days) called Decisions from Data: Driving an Organization on Analytics. We have just finished the first version of both of these courses, and it has been a very enjoyable experience. The students (in both courses) have been interested and keen to learn, bringing relevant and interesting problems to the table, and we have managed to do what it said on the tin (I think) – make them better consumers of analytics, capable of having a conversation with the analytics team, employing the right vocabulary and being able to ask more intelligent questions.

Of course, programs of this type do not allow you to dive deep into how things work, though we have been able to demonstrate MySQL, Python and DataRobot, and also give the students an understanding of how rapidly these things are evolving. We have talked about deep learning, for instance, but not about how it works.

But that is easy to fix – almost everything about machine learning is available on Youtube and in other web channels, once you know a little bit of the language. For instance, to understand how deep learning works, you can check out a series of videos from Grant Sanderson, who produces very good educational videos under the name 3Blue1Brown.

(There are follow-up videos: Chapter 2, Chapter 3, and a formal calculus appendix to Chapter 3. This Youtube channel has a lot of other math-related videos, too, including a great explanation of how Bitcoin works, which I’ll have to get into at some point, since I keep being asked all the time why I don’t invest in Bitcoin.)

Of course, you have to be rather interested to dive into this, and it certainly is not required reading for an executive who only wants to be able to talk intelligently to the analytics team. But it is important (and a bit reassuring) to note the mechanisms employed: breaking a very complex problem up into smaller problems, breaking those up into even smaller problems, solving the small problems by programming, and then stepping back up. For those of you with high school math: it really isn’t that complicated. Just complicated in layers.

And it is good to know that all this advanced AI stuff really is rather basic math. Just applied in an increasingly complex way, really fast.
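
To make the “basic math, applied in layers” point concrete, here is a minimal sketch – in Python with NumPy, my choice rather than anything from the videos or the course – of what one forward pass through a tiny neural network actually computes. Every layer is just a matrix multiplication, an addition, and a squashing function, stacked on top of each other.

    import numpy as np

    def sigmoid(z):
        # Squash any number into the range (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, weights, biases):
        # One forward pass: every layer is just "multiply, add, squash"
        activation = x
        for W, b in zip(weights, biases):
            activation = sigmoid(W @ activation + b)
        return activation

    # A tiny, made-up network: 4 inputs -> 3 hidden units -> 2 outputs
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
    biases = [rng.standard_normal(3), rng.standard_normal(2)]

    x = np.array([0.5, -1.0, 0.25, 0.8])  # an example input vector
    print(forward(x, weights, biases))    # two numbers between 0 and 1

The learning part – adjusting the weights and biases until the outputs become useful – is what gradient descent and backpropagation, covered in the follow-up videos, take care of.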

Analytics projects

Together with Chandler Johnson and Alessandra Luzzi, I currently teach a course called Analytics for Strategic Management. In this course (now in its second iteration), executive students work on real projects for real companies, applying various forms of machine learning (big data, analytics, whatever you want to call it) to business problems. We have just finished the second of five modules, and the projects are now defined.

Here is a (mostly anonymised) list:

  • The Agency for Public Management and eGovernment (Difi) wants to understand and predict which citizens are likely to opt out of electronic communications from the government. The presumption is that these people may be mostly old, not on electronic media, or in other ways digitally unsophisticated – but that may not be true, so they want to find out.
  • An electric power distribution company wants to investigate power imbalances in the electric grid: in the grid, production has to match consumption at all times, or you will get (sometimes rather large) price fluctuations. Can they predict when imbalances (more consumption than production, for instance) will occur, so that they can adjust accordingly?
  • A company in the food and beverage industry wants to offer recommendations to their (business) customers: when you order products from them, how can they suggest other products that may either sell well or differentiate the customer from the competition?
  • A petroleum producing company wants to predict unintended shutdowns and slowdowns in their production infrastructure. Such problems are costly and risky, but predictions are difficult because they are rather rare – and that creates difficulties with unbalanced data sets (see the sketch after this list).
  • A major bank wants to look into the security profiles of their online customers and investigate whether some customers are less likely to be exposed to security risks (and therefore may be able to use less cumbersome security procedures than others).
  • An insurance company wants to investigate which of their new customers are likely to leave them (churn analysis) – and why. They want to find them early, while there is still time to do something to make them stay.
  • A ship management company wants to investigate the use of certain types of oil and optimise the delivery and use of it. (Though the oil is rather specialised, the ships are large and the expense significant.)
  • Norsk Tipping runs a service helping people who are in danger of becoming addicted to gaming, an important part of their societal responsibility which they take very seriously. They want to identify which of their customers are most likely to benefit from intervention. This is a rather tricky and interesting problem – you need to identify not only those who are likely to become addicted, but also make a judgement as to whether the intervention (of which there is limited capacity) is likely to help.
  • A major health club chain wants to identify customers who are not happy with their services, and they want to find them early, so they can make offers to activate them and make them stay.
  • A regional bank wants to identify customers who are about to leave them, particularly those who want to move their mortgage somewhere else. (This is also a problem of unbalanced data sets, since most customers stay.)
  • A major electronic goods retailer wants to do market basket analysis to be able to recommend and stock products that customers are likely to buy together with others.
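
Several of these projects – the shutdown prediction and the churn cases in particular – will run into the unbalanced-data-set problem mentioned above. As a minimal sketch of one common countermeasure (the data here is made up, and the library choice – scikit-learn – is my assumption, not necessarily what the student groups will use), you can tell the classifier to weight the rare class up:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Made-up data: 10,000 customers with five anonymous features;
    # only a few percent of them actually churn (the rare class).
    rng = np.random.default_rng(42)
    X = rng.standard_normal((10_000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(10_000) > 2.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # class_weight="balanced" makes the rare class count as much as the common
    # one, so the model cannot look good by predicting "no churn" for everyone.
    model = LogisticRegression(class_weight="balanced", max_iter=1000)
    model.fit(X_train, y_train)

    print(classification_report(y_test, model.predict(X_test)))

Without the weighting (or some form of resampling), a model can appear very accurate simply by always predicting the majority class – which is precisely the difficulty the unbalanced projects will have to deal with.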

All in all, a fairly typical set of examples of the use of machine learning and analytics in business – and I certainly like to work with practical examples with very clearly defined benefits. Now – a small matter of implementation!