Category Archives: Academically speaking

Stringing those dimensions together…

This video tries to do something very difficult: Explain dimensions beyond the four we are used to. And does a good job of it.

(And to my students – watch this video after having read Neal Stephenson’s In the Beginning… Was the Command Line, as an introduction to the course on technology strategy.)

(Via Cory)

Plagiarism showcased – and a call for action

I hate plagiarism: partly because it has happened to me, partly because I publish far too little (being overly self-critical about my own lack of original thinking), and partly because I have caught quite a few students at it and am getting more and more tired of having to explain, even to executive students with serious job experience, that clipping somebody else’s text and presenting it as your own is not permissible. This year, I even had a student copy things out of Wikipedia and argue that it wasn’t plagiarism because Wikipedia is not copyrighted.

I suspect plagiarism is a bigger problem than we think. The most recent spat is noted in Boing Boing – read the comments if you want a good laugh and some serious discussion. (My observation, not particularly original: Even if this thing wasn’t plagiarized, isn’t this rather thin for a doctoral dissertation?)

The thing is, plagiarism will come back to bite you, and with the search tools out there, I can see a point in a not too distant future where all academic articles ever published will be fed into a plagiarism checker, with very interesting results. Quite a few careers will end, no doubt after much huffing and puffing. Johannes Gehrke and friends at Cornell have already done work on this for computer science articles – I just can’t wait to see what will come out of tools like these when they really get cranking. I seem to remember Johannes as saying that most people don’t plagiarize, but that a few seem to do it quite a lot.
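A minimal sketch of how such a checker might flag overlap: break each document into word n-grams ("shingles") and compare the sets. Real systems, including the Cornell work mentioned above, are far more sophisticated; this only illustrates the basic idea, and the function names are my own.

```python
def shingles(text, n=5):
    """Return the set of word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap(doc_a, doc_b, n=5):
    """Jaccard similarity of the two documents' shingle sets (0.0 to 1.0).
    1.0 means identical phrasing; 0.0 means no shared n-word phrases."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Run every pair of articles in a corpus through something like this and the "few people who do it quite a lot" would stand out quickly.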

It is high time we turned the plagiarism-control protocols we use on students loose on published academic work as well. Nothing like many eyeballs to dig out that shallowness….

From links to seeds: Edging towards the semantic web

Wolfram Alpha just may take us one step closer to the elusive Semantic Web, by evolving a communication protocol out of its query terms.

(this is very much in ruminating form – comments welcome)

Wolfram Alpha officially launched on May 18 – an exciting new kind of "computational" search engine which, rather than looking up documents where your question has been answered before, actually computes the answer. The difference, as Stephen Wolfram himself has put it, is that if you ask what the distance is to the moon, Google and other search engines will find you documents that tell you the average distance, whereas Wolfram Alpha will calculate the distance right now and tell you that, in addition to many other facts (such as the average). Wolfram Alpha does not store answers, but creates them every time. And it primarily answers numerical, computable questions.

The difference between Google (and other search engines) and Wolfram Alpha is not so clear-cut, of course. If you ask Google "17 mpg in liters per 100km" it will calculate the result for you. And you can send Wolfram Alpha non-computational queries such as "Norway" and it will give an informational answer. The difference lies more in what kind of data the two services work against, and how they determine what to show you: Google crawls the web, tracking links and monitoring user responses, in a sense asking every page and every user of their services what they think about all web pages (mostly, of course, we don’t think anything about most of them, but in principle we do.) Wolfram Alpha works against a database of facts with a set of defined computational algorithms – it stores less and derives more. (That being said, they will both answer the question "what is the answer to life, the universe and everything" the same way….)
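The "17 mpg in liters per 100 km" query is a good example of how simple this kind of calculator answer really is underneath: it is a reciprocal conversion using two fixed constants. A sketch (the constants are the standard US gallon and international mile definitions; the function name is my own):

```python
LITERS_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344


def mpg_to_l_per_100km(mpg):
    """Convert US miles-per-gallon to liters per 100 kilometers."""
    km_per_liter = mpg * KM_PER_MILE / LITERS_PER_US_GALLON
    return 100.0 / km_per_liter
```

For 17 mpg this comes out at roughly 13.8 L/100 km – the same answer either engine will give you.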

While the technical differences are important and interesting, the real difference between WA and Google lies in the kinds of questions they can answer – to use Clayton Christensen’s concept, the different jobs you would hire them to do. You would hire Google to figure out information, introductions, background and concepts – or to find that email you didn’t bother filing away in the correct folder. You would hire Alpha to answer precise questions and get the facts, rather than what the web collectively has decided are the facts.

The meaning of it all

Now – what will the long-term impact of Alpha be? Google has made us replace categorization with search – we no longer bother filing things away and remembering them, for we can find them with a few half-remembered keywords, relying on sophisticated query front-end processing and the fact that most of our not-that-great minds think depressingly alike. Wolfram Alpha, on the other hand, is quite a different animal. Back in the 80s, I once saw someone exhort their not very digital readers to think of the personal computer as a "friendly assistant who is quite stupid in everything but mathematics." Wolfram Alpha is quite a bit smarter than that, of course, but the fact is that we now have access to a service which, quite simply, will do the math and look up the facts for us. Our own personal Hermione Granger, as it were.

I think the long-term impact of Wolfram Alpha will be to further something that may not have started with Google, but certainly became apparent with them: the use of search terms (or, if you will, seeds) as references. It is already common, rather than writing out a URL, to help people find something by saying "Google this and you will find it". I have a couple of blogs and a web page, but googling my name will get you there faster (and you can misspell my last name and still not miss.) The risk in doing that, of course, is that something can intervene. As I read in this paper, General Motors a few years ago ran an ad for a new Pontiac model, at the end of which they exhorted the audience to "Google Pontiac" to find out more. Mazda quickly set up a web page with Pontiac in it, bought some keywords on Google, and quite literally shanghaied GM’s ad.

Wolfram Alpha, on the other hand, will, given the same input, return the same answer every time. If the answer should change, it is because the underlying data has changed (or, extremely rarely, because somebody figured out a new way of calculating it.) It would not be because someone external to the company has figured out a way to game the system. This means that we can use references to Wolfram Alpha as shorthand – enter "budget surplus" in Wolfram Alpha, and the results will stare you in the face. In the sense that math is a language for expressing certain concepts in a very terse and precise language, Wolfram Alpha seeds will, I think, emerge as a notation for referring to factual information.
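If query seeds do become references, passing one around is just a matter of URL-encoding it against the site’s input path. A sketch (the `/input/?i=` address format is an assumption based on how the public site’s URLs appear; the function name is my own):

```python
from urllib.parse import quote_plus


def alpha_seed_url(seed):
    """Turn a Wolfram Alpha query seed into a shareable URL.

    Assumes the site's /input/?i=<query> address format; spaces and
    other unsafe characters are percent- or plus-encoded."""
    return "http://www.wolframalpha.com/input/?i=" + quote_plus(seed)
```

So `alpha_seed_url("budget surplus")` yields a stable reference that should return the same answer every time, barring changes in the underlying data.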

A short detour into graffiti

Back in the early-to-mid-90s, Apple launched one of the first pen-based PDAs, the Apple Newton. The Newton was, for its time, an amazing technology, but for once Apple screwed it up, largely because they tried to make the device do too much. One important issue was the handwriting recognition software – it would let you write in your own handwriting, and then try to interpret it. I am a physician’s son, and I certainly took after my father in the handwriting department. Newton could not make sense of my scribbles, even if I tried to behave, and, given that handwriting recognition is hard, it took a long time doing it. I bought one, and then sent it back. Then the Palm Pilot came, and became the device to get.

The Palm Pilot did not recognize handwriting – it demanded that you, the user, write to it in a sign language called Graffiti, which recognized individual characters. Most of the characters resembled the regular ones enough that you could guess what they were; for the others you either had to consult a small plastic card or experiment. The feedback was rapid, so experimenting usually worked well, and pretty soon you had learned – or, rather, your hand had learned – to enter the Graffiti characters rapidly and accurately.

Wolfram Alpha works in the same way as Graffiti did: as Stephen Wolfram says in his talk at the Berkman Center, people start out writing natural language but pretty quickly trim it down to just the key concepts (a process known in search technology circles as "anti-phrasing".) In other words, by dint of patience and experimentation, we (or, at least, some of us) will learn to write queries in a notation that Wolfram Alpha understands, much like our hands learned Graffiti.

From links to seeds to semantics

Semantics is really about symbols and shorthand – a word is created as shorthand for a more complicated concept by a process of internalization. When learning a language, rapid feedback helps (which is why I think it is easier to learn a language with a strict and terse grammar than a permissive one), simplicity helps, and so does a structure and culture that allows for creating new words by relying on shared context and intuitive combinations (see this video with Stephen Fry and Jonathan Ross on language creation for some great examples.)

And this is what we need to do – gather around Wolfram Alpha and figure out the best way of interacting with the system – and then conduct "what if" analyses of what happens if we change the input just a little. To a certain extent, it is happening already, starting with people finding Easter eggs – little jokes developers leave in programs for users to find. Pretty soon we will start figuring out the notation, and you will see web pages use Wolfram Alpha queries first as references, then as modules, then as dynamic elements.

It is sort of quirky when humans start to exchange query seeds (or search terms, if you will).  It gets downright interesting when computers start doing it. It would also be part of an ongoing evolution of gradually increasing meaningfulness of computer messaging.

When computers – or, if you will, programs – needed to exchange information in the early days, they did it in a machine-efficient manner – information was passed using shared memory addresses, hexadecimal codes, assembler instructions and other terse and efficient, but humanly unreadable encoding schemes. Sometime in the early 80s, computers were getting powerful enough that the exchanges gradually could be done in human-readable format – the SMTP protocol, for instance, a standard for exchanging email, could be read and even hand-built by humans (as I remember doing in 1985, to send email outside the company network I was on.) The world wide web, conceived in the early 90s and live to a wider audience in 1994, had at its core an addressing system – the URL – which could be used as a general way of conversing between computers, no matter what their operating system or languages. (To the technology purists out there – yes, WWW relies on a whole slew of other standards as well, but I am trying to make a point here) It was rather inefficient from a machine communication perspective, but very flexible and easy to understand for developers and users alike. Over time, it has been refined from pure exchange of information to the sophisticated exchanges needed to make sure it really is you when you log into your online bank – essentially by increasing the sophistication of the HTML markup language towards standards such as XML, where you can send over not just instructions and data but also definitions and metadata.
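A hand-built SMTP exchange of the kind mentioned above looks roughly like this – type the commands over a raw connection to port 25 and the server answers with numeric reply codes (the host names and addresses here are of course made up):

```
$ telnet mail.example.com 25
220 mail.example.com ESMTP ready
HELO myhost.example.org
250 mail.example.com
MAIL FROM:<me@example.org>
250 OK
RCPT TO:<you@example.com>
250 OK
DATA
354 End data with <CRLF>.<CRLF>
Subject: Hello

A message typed by hand.
.
250 OK: queued
QUIT
221 Bye
```

Terse, but readable – a human can follow, and even conduct, the whole conversation.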

The much-discussed semantic web is the natural continuation of this evolution – programming further and further away from the metal, if you will. Human requests for information from each other are imprecise but rely on shared understanding of what is going on, ability to interpret results in context, and a willingness to use many clues and requests for clarification to arrive at a desired result. Observe two humans interacting over the telephone – they can have deep and rich discussions, but as soon as the conversation involves computers, they default to slow and simple communication protocols: Spelling words out (sometimes using the international phonetic alphabet), going back and forth about where to apply mouse clicks and keystrokes, double-checking to avoid mistakes. We just aren’t that good at communicating as computers – but can the computers eventually get good enough to communicate with us?

I think the solution lies in mutual adaptation, and the exchange of references to data and information in other terms than direct document addresses may just be the key to achieving that. Increases in performance and functionality of computers have always progressed in a punctuated equilibrium fashion, alternating between integrated and modular architectures. The first mainframes were integrated with simple terminal interfaces, which gave way to client-server architectures (exchanging SQL requests), which gave way to highly modular TCP/IP-based architectures (exchanging URLs), which may give way to mainframe-like semi-integrated data centers. I think those data centers will exchange information at a higher semantic level than any of the others – and Wolfram Alpha, with its terse but precise query structure may just be the way to get there.

Notes from Stephen Wolfram webcast

These are my raw notes from the session with Stephen Wolfram on the pre-launch of the Wolfram Alpha service at the Berkman center. Unfortunately, I was on a really bad Internet connection and only got the sound, and missed the first 20 minutes or so running around trying to find something better.

Notes from Stephen Wolfram on Alpha debut

…discussion of queries:
– nutrition in a slice of cheddar
– height of Mount Everest divided by length of Golden Gate bridge
– what’s the next item in this sequence
– type in a random number, see what it knows about it
– "next total solar eclipse"

What is the technology?
– computes things, it is harder to find answers on the web the more specifically you ask
– instead, we try to compute using all kinds of formulas and models created from science and package it so that we can walk up to a web site and have it provide the answer

– four pieces of technology:
— data curation, trillions of pieces of curated data, free/licensed, feeds, verify and clean this (curate), built industrial data curation line, much of it requires human domain expertise, but you need curated data
— algorithms: methods and models, expressed in Mathematica, there is a finite number of methods and models, but it is a large number…. now 5-6 million lines of math code
— linguistic analysis to understand input, no manual or documentation, have to interpret natural language. This is a little bit different from traditional NL processing: working with a more limited set of symbols and words. Many new methods; it has turned out that ambiguity is not such a big problem once we have mapped the input onto a symbolic representation
— ability to automate presentation of things. What do you show people so they can cognitively grasp what you are presenting – requires computational esthetics, domain knowledge.

Will run on 10k CPUs, using Grid Mathematica.
90% of the shelves in a typical reference library we have a decent start on
provide something authoritative and then give references to something upstream that is
know about ranges of values for things, can deal with that
try to give footnotes as best we can

Q: how do you deal with keeping data current
– many people have data and want to make it available
– mechanism to contribute data and mechanism for us to audit it

first instance is for humans to interact with it
there will be a variance of APIs,
intention to have a personalizable version of Alpha
metadata standards: when we open up our data repository mechanism, the ones we use there can make data available

Questions from audience:

Differences of opinion in science?
– we try to give a footnote
– Most people are not exposed to science and engineering, you can do this without being a scientist

How much will you charge for this?
– website will be free
– corporate sponsors will be there as well, in sidebars
– we will know what kind of questions people ask, how can we ingest vendor information and make it available, need a wall of auditing
– professional version, subscription service

Can you combine databases, for instance to compute total mass of people in England?
– probably not automatically…
– can derive it
– "mass of people in England"
– we are working on the splat page, what happens when it doesn’t know, tries to break the query down into manageable parts
300th largest country in Europe? – answers "no known countries"

Data sources? Population of Internet users. how do you choose?
– identifying good sources is a key problem
– we try do it well, use experts, compare
– US government typically does a really good job
– we provide source information
– have personally been on the phone with many experts, is the data knowable?
– "based on available mortality data" or something

Technology focus in the future, aside from data curation?
– all of them need to be pushed forward
– more, better, faster of what we have, deeper into the data
– being able to deal with longer and more complicated linguistics
– being able to take pseudocode
– being able to take raw data or image input
– it takes me 5-10 years to understand what the next step is in a project…

How do you see this in contrast with semantic web?
– if the semantic web had been there, this would be much easier
– most of our data is not from the web, but from databases
– within Wolfram Alpha we have a symbolic ontology, didn’t create this as top down, mostly bottom-up from domains, merged them together when we realized similarities
– would like to do some semantic web things, expose our ontological mechanisms

At what point can we look at the formal specs for these ontologies?
– good news: All in symbolic mathematical code
– injecting new knowledge is complicated – nl is surprisingly messy, such as new terms coming in, for instance putting in people and there is this guy called "50 cent"
– exposure of ontology will happen
– the more words you need to describe the question, the harder it is
– there are holes in the data, hope that people will be motivated to fill them in

Social network? Communities?
– interesting, don’t know yet

How about more popular knowledge?
– who is the tallest of Britney Spears and 50 cent
– popular knowledge is more shallowly computable than scientific information
– linguistic horrors, book names and such, much of it clashes
– will need some popularity index, use Wikipedia a lot, can determine whether a person is important or not

The meaning of life? 42….

Integration with CYC?
– CYC is most advanced common sense reasoning system
– CYC takes what they reason about things and make it computing strengths
– human reasoning not that good when it comes to physics, more like Newton and using math

Will you provide the code?
– in Mathematica, code tends to be succinct enough that you can read it
– state of the art of synthesizing human-readable theorems is not that good yet
– humans are less efficient than automated and quantitative qa methods
– in many cases you can just ask it for the formula
– our pride lies in the integration, not in the models, for they come from the world
– "show formula"

Will this be integrated into Mathematica?
– future version will have a special mode, linguistic analysis, pop it to the server, can use the computation

How much more work on the natural language side?
– we don’t know
– pretty good at removing linguistic fluff, have to be careful
– when you look at people interacting with the system, they start out verbose, but pretty soon they get lazy and only type in the things they need to know
– word order irrelevant, queries get pared down, we see deep structure of language
– but we don’t know how much further we need to go

How does this change the landscape of public access to knowledge?
– proprietary databases: challenge is make the right kind of deal
– we have been pretty successful
– we can convince them to make it casually available, but we would have to be careful that the whole thing can’t be lifted out
– we have yet to learn all the issues here

– have been pleasantly surprised by the extent to which people have given access
– there is a lot of genuinely good public data out there

This is a proprietary system – how do you feel about a wiki solution outcompeting you?
– that would be great, but
– making this thing is not easy, many parts, not just shovel in a lot of data
– Wikipedia is fantastic, but it has gone in particular directions. If you are looking for systematic data, properties of chemicals, for instance, over the course of the next two years, they get modified and there is not consistency left
– the most useful thing about Wikipedia is the folk knowledge you get there, what are things called, what is popular
– have thought about how to franchise out, it is not that easy
– by the way, it is free anyway…
– will we be inundated by new data? Encouraged by good automated curation pipelines. I like to believe that an ecosystem will develop, we can scale up.
– if you want this to work well, you can’t have 10K people feeding things in, you need central leadership

Interesting queries?
– "map of the cat" (this is what I call artificial stupidity)
– does not know anatomy yet
– how realtime is stock data? One minute delayed, some limitations
– there will be many novelty queries, but after that dies down, we are left with people who will want to use this every day

How will you feel if Google presents your results as part of their results?
– there are synergies
– we are generating things on the fly, this is not exposable to search engines
– one way to do it could be to prescan the search stream and see if wolfram alpha can have a chance to answer this

Role for academia?
– academia no longer accumulates data, useful for the world, but not for the university
– it is a shame that this has been seen as less academically respectable
– when chemistry was young, people went out and looked at every possible molecule
– this is much too computationally complicated for the typical libraries
– historical antecedents may be Leibniz’ mechanical and computational calculators, he had the idea, but 300 years too early

When do we go live?
… a few weeks
– maybe a webcast if we dare…

Steroids for the flighty-minded

An excellent and truly scary article by Margaret Talbot in the New Yorker about the use of neuroenhancers by people who are not ill. Which is comparable to recreational plastic surgery, which I don’t like either.

Is it just me, or is cheating seen as more and more normal, not something to be punished or even held in contempt? When I catch students plagiarizing (which happens with depressing frequency, partly because the tools for doing so have gotten so much better), their defense is more and more that this is normal, that you cannot expect them to come up with something original when everything is available out there on Google and Wikipedia. My retort is that I need to judge them on their own work, not others’, and that they therefore need to make it clear to me what they have done themselves and what they have found somewhere else. And their answer is that they put "Source: Wikipedia" at the bottom and are therefore scot-free, so there.

I would get angry if this wasn’t so depressing and so pointless. I am tempted to just fail them. Not for plagiarism – which entails disciplinary committees and all sorts of make-work. Rather an F for outright stupidity.

It is some consolation that creativity is one area where neuroenhancers don’t seem to work. But they might, as the article finds, help these modern-day multitaskers concentrate on one specific task (hoping that it is a productive one and not, say, obsessively alphabetizing your library.) But neuroenhancers won’t make your ideas better – they won’t assist in spotting the prey, only in bringing it home. In the most dreary way possible:

Every era, it seems, has its own defining drug. Neuroenhancers are perfectly suited for the anxiety of white-collar competition in a floundering economy. And they have a synergistic relationship with our multiplying digital technologies: the more gadgets we own, the more distracted we become, and the more we need help in order to focus. The experience that neuroenhancement offers is not, for the most part, about opening the doors of perception, or about breaking the bonds of the self, or about experiencing a surge of genius. It’s about squeezing out an extra few hours to finish those sales figures when you’d really rather collapse into bed; getting a B instead of a B-minus on the final exam in a lecture class where you spent half your time texting; cramming for the G.R.E.s at night, because the information-industry job you got after college turned out to be deadening. Neuroenhancers don’t offer freedom. Rather, they facilitate a pinched, unromantic, grindingly efficient form of productivity.

If you find that tempting, be my guest. I am sure you can find directions via Google.

Jon Udell on observable work

Jon Udell has a great presentation over at Slideshare on how to work in observable spaces – something that should be done, to a much larger extent, by academics. I quite agree (and really need to get better at this myself):

What if you could remember everything?

I was delighted when I found this video, where James May (the cerebral third of Top Gear) talks to professor Alan Smeaton of Dublin City University about lifelogging – the recording of everything that happens to a person over a period of time, coupled with the construction of tools for making sense of the data.

In this example, James May wears a Sensecam for three days. The camera records everything he does (well, not everything, I assume – if you want privacy, you can always stick it inside your sweater) by taking a picture every 30 seconds, or when something (temperature, IR rays in front (indicating a person) or GPS location) changes. As it is said in the video, some people have been wearing these cameras for years – in fact, one of my pals from the iAD project, Cathal Gurrin, has worn one for at least three years. (He wore it the first time we met, where it snapped a picture of me with my hand outstretched.)

The software demonstrated in the video groups the pictures into events, by comparing the pictures to each other. Of course, many of the pictures can be discarded in the interest of brevity – for instance, for anyone working in an office and driving to work, many of the pictures will be of two hands on a keyboard or a steering wheel, and can be discarded. But the rest remains, and with powerful computers you can spin through your day and see what you did on a certain date.
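The grouping idea can be sketched very simply: represent each photo by a small feature vector (a color histogram, say) and start a new event whenever consecutive photos differ by more than a threshold. The real Sensecam software is far more elaborate, and also uses the sensor data; this is only an illustration of the principle, with names of my own choosing.

```python
def segment_events(features, threshold):
    """Group a stream of consecutive feature vectors into events.

    Starts a new event whenever the Euclidean distance between two
    consecutive photos' features exceeds `threshold`. Returns a list
    of events, each a list of indices into `features`."""
    if not features:
        return []
    events = []
    current = [0]
    for i in range(1, len(features)):
        # distance between this photo and the previous one
        dist = sum((a - b) ** 2 for a, b in zip(features[i - 1], features[i])) ** 0.5
        if dist > threshold:
            events.append(current)  # scene changed: close the event
            current = []
        current.append(i)
    events.append(current)
    return events
```

With the two-hands-on-a-keyboard pictures collapsed into one long event, "spinning through your day" becomes a matter of showing one representative picture per event.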

And here is the thing: this means that you will increasingly have the option of never forgetting anything again. You know how it is – you may have forgotten everything about some event, and then something – a smell, a movement, a particular color – makes you remember, by triggering the part of your brain (or, more precisely, the strands of your intracranial network) in which this particular memory is stored. Memory is associative, meaning that if we have a few clues, we can access whatever is in there, even though it had been forgotten.

Now, a set of pictures taken at 30-second intervals, coupled together in an easy-to-use and powerful interface – that is a rather powerful aide-mémoire.

Forgetting, however, is done for a purpose – to allow you to concentrate on what you are doing rather than using spare brain cycles in constant upkeep of enormous but unimportant memories. For this system to be effective, I assume it would need to be helpful in forgetting as well as remembering – and since everything would be stored, you would not actually have to expend so much effort remembering things: given a decent interface, you could always look it up again, much as we look things up in a notebook.

Think about that – remembering everything – or, at least being able to recall it at will. Useful – or an unnecessary distraction?

Shirky on newspapers

Clay Shirky, the foremost essayist on the Internet and its boisterous intrusion into everything, has done it again: Written an essay on something already thoroughly discussed with a new and fresh perspective. This time, it is on the demise of newspapers – the short message is that this is a revolution, and saving newspapers just isn’t going to happen, because this is, well, a revolution:

[..]I remember Thompson [in 1993] saying something to the effect of “When a 14 year old kid can blow up your business in his spare time, not because he hates you but because he loves you, then you got a problem.” I think about that conversation a lot these days.


Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.


That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.

And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

That simple. He draws the line back to the Gutenberg printing press and the enormous transition that caused – much more chaotic than you would think with 500 years of hindsight.

Highly recommended. And another piece of reading for my suffering students….

The perils of openness

Mary Beard has a really interesting perspective on the consequences of openness: Transparency is the new opacity. In the absence of confidential channels (which, given today’s storage and search capabilities, you have no guarantee will remain confidential) very little actual information gets transmitted in student appraisals.

And the only difference between job appraisals and student appraisals, I assume, lies in vocabulary. As a technologist, I could envision all kinds of technical fixes to this, assuming that those in charge of the specifications acknowledge that they are necessary: fields for comments hidden from the subject, fields that expire a certain time after being read, filters to search engines that handle confidentiality – including the fact that there is a confidential comment in the first place (which turns out to be surprisingly hard to do.)

But the more natural fix is the quick conversation in the pub, the hallway, or on the private cell phone – impervious to search, storage and documentation – where the real information can be exchanged. The electronic equivalent? Encrypted Twitter, perhaps, if such a thing exists.

What we need are online coffee shops, offering the same discreet, transient and history-less marketplace for information. Today I spend time on the phone with my colleagues for that, but it doesn’t work well across time zones. So – what would such a thing look like, and how would we build it?

PS: Come to think of it, Skype is encrypted, at least the phone calls.

Shannon, explained…

Peter Cochrane has a simple and very useful explanation of Claude Shannon’s mathematical law of communication, complete with diagrams. And a warning that, when it comes to technology, magic won’t work there, either.

We might thus imagine the energy of a signal dispersed inside such a solid form in the same way that water is retained by the skin of a balloon. We can change the shape of the balloon but the amount of water stays the same. Similarly, different coding and modulation schemes can alter the ratios of the sides presented by Shannon’s equation.

We can certainly trade off signal power against noise and/or bandwidth and time, but we can never exceed the bounds set by nature.
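Cochrane’s balloon analogy is easy to play with numerically. Here is a minimal sketch (all figures invented for illustration) of the Shannon–Hartley bound, C = B·log2(1 + S/N): halving the bandwidth forces the signal-to-noise ratio up sharply if capacity is to stay constant – the balloon changes shape, never volume.

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# A 1 MHz channel at 15 dB signal-to-noise ratio.
snr = 10 ** (15 / 10)  # 15 dB as a linear power ratio, ~31.6
c1 = shannon_capacity(1e6, snr, 1.0)  # ~5.03 Mbit/s

# Halve the bandwidth: to keep the same capacity, the SNR must rise
# to (1 + snr)**2 - 1, i.e. from ~15 dB to ~30 dB.
snr2 = (1 + snr) ** 2 - 1
c2 = shannon_capacity(0.5e6, snr2, 1.0)  # same capacity, far more power
```

The exponential price of that trade is exactly why no coding or modulation scheme, however clever, can exceed the bound – only move along it.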

Seeking PhD candidates for iAD project

(Note: This is not the official announcement, which you can find here, where you will also find a link to the application program. I post this here because this blog is easier to update, allows me to link to pertinent information more easily, allows pictures, and allows comments and questions.)



Announcement: Available Ph. D. Scholarships in Technology Strategy


BI Norwegian School of Management is inviting applications for scholarships in technology strategy. The scholarships are made available through the iAD Center for Research-based Innovation, an eight-year research project funded by the Norwegian Research Council and hosted by FAST Search and Transfer, a Microsoft Company. The candidates will pursue their Ph. D. through the doctoral programs of BI Norwegian School of Management and do their thesis research on topics of interest to the iAD project.

Continue reading

Clayton Christensen on health care disruption

Here is Clayton Christensen giving a talk on disruptions in health care (but really a good introduction on disruption in general) at MIT:


Note that Clay uses Øystein Fjeldstad’s Value Configurations framework a little before 1:00:00 – a result of many conversations aboard the "Disruptive Cruise" which I arranged last year…. don’t say we aren’t doing our part over here….

Liveblogging from Sophia Antipolis

These are my running notes from a visit to Accenture’s Technology Labs in Sophia Antipolis, as part of a Master of Management program called "Strategic Business Development and Innovation" at the Norwegian School of Management.

Accenture’s Technology Labs is a relatively small organization: 200 researchers out of 180,000 Accenture employees. There are four tech labs – Silicon Valley, Chicago (the largest), Sophia Antipolis and Bangalore. In principle each should be able to do everything, but in practice there is specialization. The four main activities of the tech labs are technology visioning, research, development of specific platforms, and innovation workshops (with clients, press, consultants etc.). The themes pursued are mobility and sensors; analytics and insight; human interaction & performance; systems integration (architecture, development methods); and infrastructure (virtualization, cloud computing).

Kelly Dempski: Power Shift: Accenture Technology vision

The visioning used to be far-reaching and speculative; now it has a much more immediate focus, looking at things you can implement today, making it much more "grounded in reality".

Eight critical trends:

  • 1: Cloud computing and SaaS: hardware cloud (IBM, Google (now the third largest producer of servers in the world)), desktop cloud (Google, Zimbra, MS Office Live Workspace), SaaS cloud (Netsuite, CrownPeak), and services cloud (Google Checkout, Amazon web services, eBay, Yahoo)
    • examples: Flextronics has moved their HR applications over to an SaaS model. AMD emulates chips in software for testing purposes, and now contracts with Sun to do that in the cloud. The New York Times had 4TB of articles they wanted to convert to PDF: someone went on Amazon with their credit card, uploaded the 4TB and processed it in 24 hours; there was a bug, so they had to do it all again – 48 hours of processing in total, at a total cost of $250 on someone’s credit card.
    • issues:
      • data location (where is the data)
      • privacy and security
      • performance
  • 2: Systems – regular and lite
    • SOA as the integration paradigm (regular), mashups (lite)
    • traditional back-end apps vs. end-user apps
    • small number of apps maintained by CIOs vs. large number of user- and user-group-created applications (long tail)
    • examples:
      • REST is a light architectural approach for interoperability & data extraction
      • Mashups (JackBe (trading platform tools), Serena, Duet (SAP and Microsoft), IBM) becoming more important in the enterprise arena
      • Widgets and gadgets are light-weight desktop UIs that continually update some data
  • 3: Enterprise intelligence at scale
    • combination of internet-scale computing, petabytes of data, and new algorithms
    • almost all the large systems vendors have partnered with or acquired some analytics oriented software company (such as Microsoft acquiring FAST)
    • rampant use of data: evolution through access, reporting, external & internal, unstructured etc.
  • Trends 1-2-3 together: The new CIO
    • hardware and software procured from the cloud
    • business units, end-users create their own lightweight apps
    • The new CIO:
      • "Data Fort Commander" – ensure security, privacy, integrity of corporate data and manage back-end apps
      • "Chief Intelligence Officer" – provide data analysis services & insights to business units
  • 4: Continuous access
    • mobile device "first class" IT object
    • No concept of enterprise desktop/laptop
    • location-based services
  • 5: Social computing
    • amplify and support the value of the community
    • three major directions: Platformization, inter-operability, identity management
  • 6: User-generated content
    • community-based CRM (users making videos about how to run certain kinds of software or build something from IKEA)
    • new forms of entertainment
    • revenue erosion of traditional media companies
    • this has marketing implications: You can measure the sentiment out there in the user community. You switch from advertising to engaging.
  • 7: Industrialization of software development
    • converging trends will increase integration: Predictive metrics, model-driven development, domain-specific languages, service-oriented architecture, agile-development & Forever Beta.
  • 8: Green computing
    • global warming, energy prices, consumer pressure, compliance and valuation
    • switch out energy-intensive processes for information-intensive processes: Electronic collaboration; Warehousing, supply chain & logistics optimization; Smart factories, plants, buildings & homes; and new businesses such as carbon auditing and trading

Cyrille Bataller: Biometric Identity Management

Biometric identification is coming, driven by increasing demand and technological progress. Biometric identification is defined as "automated recognition of individuals based on their physiological and/or behavioral characteristics". Physiological can be face, iris, fingerprint; behavioral can be signature, voice, or gait. It involves a tradeoff, as with all security systems, between the level of security and the convenience of the system. Fingerprint is the most used (38%), face the most natural, iris the most accurate. Many others: finger/hand vein, gait, ear shape, electricity, heat signature, hand geometry and so on…

The balance between the FMR (false match rate – accepting an impostor) and the FNMR (false non-match rate – rejecting a genuine user) is summarized in the equal error rate (EER), the point where the two are equal. Iris has an EER of .002%, 10 fingerprints .01%, a single fingerprint .4%, signature 3%, face recognition 6%, voice 8%. Many parameters matter in addition to this.
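To make the FMR/FNMR tradeoff concrete, here is a toy sketch with invented matcher scores: sweeping the decision threshold and finding where the two error rates meet gives the equal error rate. (The score lists are made up; real systems derive these curves from large test corpora.)

```python
def fmr_fnmr(genuine, impostor, threshold):
    """FMR: fraction of impostor attempts accepted (score >= threshold).
       FNMR: fraction of genuine attempts rejected (score < threshold)."""
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

# Invented matcher scores in [0, 1]: genuine pairs mostly score high,
# impostor pairs mostly low, with a little overlap.
genuine = [0.91, 0.85, 0.78, 0.95, 0.40, 0.88, 0.73, 0.97]
impostor = [0.12, 0.30, 0.45, 0.22, 0.08, 0.70, 0.19, 0.35]

# Sweep thresholds; the EER is where the FMR and FNMR curves cross.
thresholds = [t / 100 for t in range(101)]
eer_threshold = min(
    thresholds,
    key=lambda t: abs(fmr_fnmr(genuine, impostor, t)[0]
                      - fmr_fnmr(genuine, impostor, t)[1]),
)
fmr, fnmr = fmr_fnmr(genuine, impostor, eer_threshold)
# at the crossing point both error rates are 12.5% for this toy data
```

Raising the threshold trades false matches for false non-matches – the security-versus-convenience tradeoff mentioned above, made visible.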

SecuriMetrics has something called HIIDE, a mobile unit that does a number of biometrics, used in Iraq. Voice is very interesting because it can be done over the phone – interesting for call centers, banks etc. Multimodal biometrics are important because they are hard to spoof.

Airports are a good example of what you can do with proper identification: you can move 99.9% of check-in away from the airport. Bag drop can also be almost fully automated. Portugal is the leader in the EU, with automated passport control using facial recognition (scan, electronic passport etc.). Most people are not very concerned with privacy, given some assurance and convenience. We are likely to see lots of automated border clearance for the masses, but also registered travelers who go through even quicker and are interoperable across many airports. One common misunderstanding is that automated identity checking means moving away from 100% accuracy – but human passport/security control is an error-ridden process, and automated processes are mostly more accurate.

Antoine Caner: Next Generation Branch

This is a showcase exhibit of best-practice banking technology and processes. The showroom receives about 40 company visits (mostly banks) per year.

Most banks have a multi-channel strategy; they have retreated from a strategy of getting rid of branches and now want to redefine them. Rather than doing low-value transactions, the branches are seen as a mesh network for business development.

Key principles behind the branch of the future:

  • generating and taking advantage of the traffic
  • flexibility throughout the day
  • adaptation to client’s value
  • sell & service oriented
  • modular space according to needs
  • entertaining and attractive
  • focused on customer experience


Among the applications demonstrated:
  • turning the branch windows into an interactive display (realty, for instance)
  • Bluetooth-enabled push information
  • swipe card at entrance to let branch know you are there, let your account manager know, apply Amazon-like features
  • digital displays for marketing
  • avatar-based teller services
  • biometric-based ATMs to allow for more advanced transactions, as well as more opportunistic sales applications
  • do both identification and authentication
  • digital pen user interface for capturing data from forms
  • RFID-based or NFC (Near Field Communication) in brochures, swipe and get info on screen
  • "interactive wall" for interaction with clients in information seeking mode
  • visual tracking of movement in the branch
  • modular office that can change shape during the day, reconfigurable furniture

What impressed me was not the individual applications per se – though they were impressive – but the way everything had been put together, with a back-office application the branch manager can use to track how the whole customer interface (i.e., the whole bank branch) works.

Alexandre Naressi: Emerging Web Technologies

Alexandre leads the rich Internet applications community of interest within Accenture. He started off giving some background on Web 2.0 and used Flickr as an example of a Web 2.0 application, where a company uses user-generated content and tagging to get network effects on its side. Important here is not only the user interface but also having APIs that allow anyone to create applications and to have your content or services embedded into other platforms. Dimpls is another example. More than one billion people have Internet access, and 50% of the world has broadband access, which allows for richer applications. Customer behavior is changing – it is now a "read-write" web. It has also gotten much cheaper to launch something: Excite cost $3m, JotSpot $200k, Digg $200.

Rich Internet Applications and Social Software represent low-hanging fruit in this scenario. RIA gives you the functionality of a fat client in a browser interface, with very rich and capable components for programmers to play around with.

Two families of technologies: JavaScript/Ajax (doesn’t require a plugin, advocated by Google), and three plugin-based platforms: Silverlight (Microsoft), Flash/Flex (Adobe), and JavaFX (Sun). All of them also have offline clients that can be downloaded. A good example is , which gives a better user interface – Accenture has developed something similar for its internal enterprise search.

Social Software: Accenture has its own internal version of Facebook. YouTube is also a possible corporate platform, where people can contribute screencasts of all kinds of interesting demos and prototypes.

Kirsti Kierulf: Nordic Innovation Model for Accenture and Microsoft

Accenture and Microsoft are collaborating (they own a company, Avanade, together), and have set up an innovation lab in Oslo called the Accenture Innovation Lab on Microsoft Enterprise Search. Three agendas: network services, enterprise search (iAD), and service innovation. Running a number of innovation processes internally. This happens on a Nordic level, so collaboration is with academic institutions and companies all over the region.

Have made a number of tools to support innovation methodologies: InnovateIT, InnovoteIT, and InnomindIT (mind maps), as well as a method for making quick prototypes of systems and concepts for testing and experimentation: 6 weeks from idea to test.

Current innovation models are not working for long-term, risky projects. Closed models do not work – hence looser, more informal and open innovation models with shorter innovation cycles. Pull people in, share costs throughout the network, and try to avoid both the funnel, which closes down projects with no clear business case, and not-invented-here thinking. Try to park ideas rather than kill them.

Important: Ask for advice, stay in the question, maintain relationships, don’t spend time on legalities and financials.

Practicing what you preach (The business school edition)

I am a board member of (prior description here) a startup company that offers a recruitment service for universities, primarily those offering master programs in business or related fields. The company now has a number of universities and business schools signed up, and we have begun to learn something about the market that we (or at least I) did not know before, even though I have worked for a large, private business school for many years.

The thing is, it seems (many) business schools do not practice what they preach – i.e., many of them fail to apply some rather basic strategies, sales practices and web practices. Here are a few observations I have made so far:

1. Business schools say they differentiate, but they don’t

The classic, Porteresque view of competitive strategy says that there are only three generic strategies you can apply: cost leadership, differentiation (i.e., being unique in some way), or segmentation (i.e., addressing specific sub-markets based on attributes of the customer). The latter, of course, is merely a more granular, partially combined version of the first two. Even though business schools should know competitive strategy well (it is, after all, one of their most important subjects), most of them do not pursue any one of these strategies. Or rather, they say they pursue a differentiation strategy but don’t. In that sense, they are neither strategic nor different.

The test for whether your strategy truly is strategic is that you have chosen not to do something you could have done. The test for whether you are differentiated is whether you can take away the school name (and things the school cannot easily change, such as nationality and location) from its description and still determine which school it is from its marketing material.

I say this because I have played around with the course and school descriptions in our database, and been struck by how similar they look. Do the test yourself – go into the Masterstudies database, look up a few schools, and ask yourself: which student segments is this school deliberately not trying to get – and what part of its offering is sufficiently different that you can tell whether it differentiates in practice or just in PowerPoint?

Most of them are looking for future leaders who see the challenges of globalization, new technology and a constantly changing marketplace as opportunities to employ innovative strategies to build flexible learning companies that create value for their customers, shareholders and employees while displaying a sense of diversity and social responsibility. Hmmmm… I wonder how large that segment really is – and to what extent the school can really serve this mythological student once he or she shows up?

The net result, of course, is a power-law distribution of interest: about 10 schools – the Harvards and Stanfords and MITs and Kelloggs of the world – get all the attention; a near-first tier is deathly afraid of doing anything the best schools do not, lest they be criticized for it; and a middle body and eventually a long tail of schools really do differentiate but dare not talk about what that differentiation really consists of – for instance, explicitly targeting those who did not make it into the first-tier schools but still are good students.

2. Business schools talk about market analysis, but many don’t do it well, or at all

Recruiting students of sufficient quality and interest is a complicated process: you have to create enough awareness to get enough applications, so you can send out enough offers to qualified applicants to get enough accepts. To do this, you have to track:

a) the number of leads (expressions of interest) you get

b) how many actually send in an application (conversion rate)

c) how many of these are qualified and get an offer (acceptance rate)

d) how many accept the offer when they get it (yield)

I assumed that every Dean of Admissions in the world eagerly tracked these numbers (they are, after all, also pretty good for measuring the sales team’s level of effort) but no – there are indications that a number of schools, in fact, do not even know them. Depending on where in the distribution of schools you are, you ought to focus on different numbers: if you are top-ten, you track yield; if you are new, you track the earlier stages of the process. (Incidentally, what Masterstudies offers is a filtered version of the first number, where schools can set up criteria for what kind of leads they want, thereby filtering out the clearly unqualified and enabling some geographical or gender balancing – the difference between carpet bombing and surgical strikes, as it were.)
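The four numbers above chain together as a simple funnel, so it is straightforward to work backwards from a target class size to the number of leads a school needs. A back-of-the-envelope sketch, with all rates invented purely for illustration:

```python
def leads_needed(target_class_size, conversion_rate, acceptance_rate, yield_rate):
    """Work the admissions funnel backwards:
    leads -> applications -> offers -> enrolled students."""
    offers = target_class_size / yield_rate          # offers needed
    applications = offers / acceptance_rate          # applications needed
    return applications / conversion_rate            # leads needed

# Hypothetical school: wants a class of 120; 10% of leads apply,
# 40% of applicants are qualified and get an offer, 50% of offers accept.
n = leads_needed(120, conversion_rate=0.10, acceptance_rate=0.40, yield_rate=0.50)
# n is 6000 leads - each rate you fail to track hides a 2-10x multiplier
```

Which is exactly why a school that does not know its rates cannot tell whether its problem is awareness, selectivity, or yield.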

These numbers are not hard to get, but fewer schools than I thought actively manage them. (Not that I have formal statistics – or would share them if I had.)

Schools differ widely in their attitude to prospective students as well. We tested the response rate of schools and found that it varied widely – some schools did not respond at all, whereas some schools were on the phone with our prospective students in less than 30 minutes. That makes a difference as to whether the students will send in an application or not. (And no, there was no quality difference between these schools in terms of rankings and so on – we could not detect any pattern at all.)

3. Business schools know little about why they don’t get the students they want

There is a classic study by Abraham Wald of the location of bullet holes in bombers, done to find out where to add armor. Wald looked at where the returning airplanes were shot up and concluded (not in the referenced paper, though) that more armor should go in the places with no bullet holes. The reason was simple enough: the planes that returned were the survivors, with bullet holes in places that could take the damage.

I wonder whether some of the same bias applies to business schools. How many of them systematically interview, or otherwise try to elicit responses from, the students who did not choose their offering – at any point in the process? Some schools are clever – for instance, one school makes sure that the lack of a GMAT is not a hindrance to starting if your grades and other academic performance indicators are good, by allowing the student to start and setting aside time and resources for GMAT certification. But I wonder how many take the time to find out whether it is lack of awareness, interest, timing, geography, content, structure, reputation or finance that makes promising students choose a different school. For those that interview candidates, I assume some of this comes out in discussions, but I have my doubts as to how well defined and executed these processes are.

I also have a sneaking suspicion that many students choose a business school for more mundane reasons than they tell school officials. It sounds better to say you chose a particular program because you like its specific subject focus or teaching philosophy (differentiation again – see point 1 above) than because the school is conveniently situated or your friends go there.

4. Business Schools make their web sites viewable, but not findable

Findability refers to the degree to which your web site or a specific page can be found by a search engine. As search technology increasingly becomes the preferred interface to information, having a findable web site becomes very important. But more and more schools are finding that when you Google their specific master programs, the description of the program found at comes up higher than their own description.

This is because schools spend a lot of money creating nice-looking web sites, but not much on making them findable. I think this is because schools understand how to create exclusive-looking brochures, but don’t know much about search engine optimization. The visual design of a site is outsourced to an ad agency, and its maintenance done through some content management package that does not use descriptive directory and file names, instead hiding new and interesting programs behind cryptic URLs. Perhaps each business school thinks its brand equity is so strong that students will know about them and come in through the front door (i.e., the home page) like they used to do 10 years ago?

I have always wondered why business school web sites are done in such an overadministered and cumbersome fashion – for instance, few business schools set up ways for faculty to have blogs, instead requiring them to have official-looking web pages that are a pain to maintain, leaving blogging to those who either set up their own blog outside school premises or have the technical gumption and political power to install the software on school infrastructure themselves. There are simple and cheap solutions around – Drupal, for instance – that allow descriptive directories and simple, shared content management. And when it comes to content – why not just use Movable Type or WordPress and the underlying software for faculty and other writers? That way, the content would be plugged into what is beginning to look like the Semantic Web almost by default.


Given that I am on the Board of a company that tries to help business schools recruit internationally, I personally think this is just swell: Lots of business extension possibilities for us. But there is low-hanging fruit here: Simple, effective strategies and practices that business schools ought to execute on, with or without our help.

As marketing and recruiting increasingly move online (and, after that, into communities such as LinkedIn and Facebook), business schools will have to understand and change their marketing to make themselves much more differentiated and findable. In the meantime, there is room for first (or rather, fast-second) movers.

May your school be one of them.

Tim O’Reilly nails it on cloud computing

In this long and very interesting post, Tim O’Reilly divides cloud computing into three types: Utility, platform and end-user applications, and underscores that network effects rather than cost advantages will be what drives economics in this area. (This in contrast to the Economist’s piece this week, which places relatively little emphasis on this, instead talking about the simplification of corporate data centers – though the Economist piece is focused on corporate IT.)

Network effects happen when having new users on a platform or service is a benefit to the other users. This benefit can come from platform integration – for instance, if we both use the same service, we can do things within that service that may not be possible between services, due to differences in implementation or a lack of translating standards.

Another benefit comes when the shared service can leverage individual users’ activities. Google’s Gmail, for instance, has a wonderful spam filter, which is very reliable because it tracks millions of users’ selections on what is spam and what isn’t.
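The mechanism behind that kind of filter is easy to sketch. Here is a toy model (a made-up `CrowdSpamFilter` class – nothing like Gmail’s actual implementation) showing the network effect: each user’s "report spam" click is pooled, so the verdict every other user sees improves with the size of the user base.

```python
from collections import Counter
from hashlib import sha256

class CrowdSpamFilter:
    """Toy sketch: pooled user reports classify mail for everyone."""

    def __init__(self, threshold=3):
        self.reports = Counter()     # fingerprint -> number of reports
        self.threshold = threshold   # reports needed to call it spam

    @staticmethod
    def fingerprint(message):
        # Real systems use fuzzy hashes to catch near-duplicates;
        # an exact hash keeps the toy simple.
        return sha256(message.strip().lower().encode()).hexdigest()

    def report_spam(self, message):
        self.reports[self.fingerprint(message)] += 1

    def is_spam(self, message):
        return self.reports[self.fingerprint(message)] >= self.threshold

f = CrowdSpamFilter()
for _ in range(3):                   # three users report the same mail...
    f.report_spam("Cheap watches!!!")
# ...and a fourth user never sees it
```

The more users, the faster any new spam campaign crosses the threshold – which is why the service with the most users has the best filter, and why this is a network effect rather than a cost advantage.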

Tim focuses on the network effects of developers, which is an important reason why Microsoft, not Apple, won the microcomputer war. When Steve Ballmer jumped around shouting "developers, developers, developers", he was demonstrating a sound understanding of what made his business take off – and was willing to make a fool of himself to prove it.

Tim also invokes Clay Christensen’s "law of conservation of attractive profits", arguing that as software becomes commoditized, opportunities for profits will spring up in adjacent markets. In other words, someone (Jeff Bezos? Larry and Sergey?) needs to start jumping up and down, shouting "mashupers, mashupers, mashupers" or perhaps "interactors, interactors, interactors" and, more importantly, provide a business model for those that build value-added services on top of the widely shared platforms and easily available applications they provide.

One way to do that could be to make available some of the data generated by user activities, which today most of these companies keep closely to themselves. That will require balancing on a sharp edge between providing useful data, taking care of user privacy, and not giving away too much of your profitability. As my colleague Øystein Fjeldstad and I wrote in an article a few years ago – the companies playing in this field will have to make some hard decisions between growing the pie and securing the biggest piece for themselves.

If we cannot harness network effects, cloud computing becomes a cost play, and after a while about as interesting, in terms of technical evolution, as utilities are now. The USA is behind Europe and Asia in mobile phone systems partly because US cellphone companies were late in developing advanced interconnect and roaming agreements, instead trying to herd customers into their own networks. Let’s hope the cloud computing companies have learned something from this…

Education and technology – a historic view

A nice review of Claudia Goldin and Lawrence F. Katz’s The Race between Education and Technology, which goes into my ever-expanding pile of books to get. Main point: income inequality decreased in the first half of the 1900s, then increased again after 1980. Chapter 8, available in PDF format, contains the following conclusion:

Our central conclusion is that when it comes to changes in the wage structure and returns to skill, supply changes are critical, and education changes are by far the most important on the supply side. The fact was true in the early years of our period when the high school movement made Americans educated workers and in the post-World War II decades when high school graduates became college graduates. But the same is also true today when the slowdown in education at various levels is robbing America of the ability to grow strong together.

The maladjusted and marginalized terrorist

Bruce Schneier, security guru extraordinaire, has a cracking good article in Wired on what motivates terrorists – The Seven Habits of Highly Ineffective Terrorists – much of it drawn from a paper by Max Abrahms called What Terrorists Really Want.

The main argument is that terrorists "turn to terrorism for social solidarity" – i.e., that they join terror organizations less for political aims and more because they themselves are alienated outcasts in search of belonging and, perhaps, of an outlet for violent or authoritarian tendencies. They are loners in search of meaning rather than radicals in search of a way to express their political views:

Individual terrorists often have no prior involvement with a group’s political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can’t describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren’t working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

I think this makes lots of sense. During the late 60s and early 70s there was a vogue in many European countries for politically active youngsters to join the far left – a movement that, at its most extreme, produced the Baader-Meinhof group in Germany. Here in tiny, peaceful Norway, a number of people who later wondered how they ever got into it joined various Marxist-Leninist groups with the stated aim of violently overthrowing the state. (A great novel by the author Dag Solstad, later turned into a film, explores these mechanisms, telling the story of a small-town high school teacher who joins the movement because he falls in love with one of its leaders.) This caused a number of bookish intellectuals from well-off homes to try to act and talk like "the people" (often with hilarious results) and take menial jobs with a view to starting strikes, unrest and, eventually, the great revolution.

The movement eventually petered out, due to a lack of Marxist-Leninist success stories, better career opportunities elsewhere, the demands of family life and, most importantly, the failure of the general populace to join the cause. Today, most of these people (especially the ideological leaders) are found in relatively good positions in society and will not thank you for bringing up this period. (In one ironic twist, one of them is a professor of journalism – an interesting position for someone who once wanted to force the press to serve the needs of the proletarian dictatorship.)

Now, imagine what would have happened if the Norwegian state had declared war on these groups and instituted all kinds of controls in the name of national safety. Suddenly they would have grown in importance, had some legitimate cases of persecution to point to (heavy-handed security always produces incidents), and been able to play off the fear and irritation induced by surveillance and controls.

Instead, the Norwegian government largely ignored them, aside from discreet monitoring for weapons violations and espionage. To the extent that anyone was arrested, the perpetrators were charged with clear violations of current law and given sentences similar to those of anyone else.

The movement did not achieve much: a few strikes, a half-hearted rebellion at a few universities, a radical newspaper that still scrapes by (and occasionally is rather good, especially after it toned down the ideology), "progressive" clothing fashions, some small groups of old professors with weird research streams, reams and streams of newspaper commentary – and that’s about it.

Now, imagine if the current war against terrorism had been pursued as a large-scale police investigation rather than a war, with terrorists being pulled into regular courts, security controls set up for security rather than show, publicity focused on a general toning down of the whole thing, money spent on improving the situation for various downtrodden groups, and military solutions employed as the absolutely last resort, and then only under the auspices of the UN.

I think al-Qaida would be reduced to a group of fringe Islamist fundamentalists with uncertain political aims, lots of fratricidal infighting (when the populace ignores them, they turn on each other), uncertain career paths and increasingly untenable positions. Which is what they were, until the Western world handed them prominence to the tune of hundreds of billions of dollars.

Bruce would, I think, agree. Here is his conclusion:

We also need to pay more attention to the socially marginalized than to the politically downtrodden, like unassimilated communities in Western countries. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations, as well as clamping down on bigotry and hate crimes, which just creates more dislocation and social isolation, and the inevitable calls for revenge.