Category Archives: iAD

Double!

Here (photo: Lene Pettersen) is the latest addition to my nerd kit: a Double from Double Robotics. I suppose this is formally defined as a telepresence robot, but a simpler way to describe it is to say that it is an iPad stuck on a Segway.

I spent most of Friday fiddling around with it, exploring what it can do. It is surprisingly natural in use: it can be raised (to about five feet) and lowered, depending on whether you want to speak to someone standing or sitting. I drove it around the BI building and quickly found that dead network spots (it needs a constant Internet connection to work) are problematic. It is also not very good at switching between routers on the same wireless network – it loses the connection and needs a couple of minutes to find it again. I’ll probably have to get an iPad with a 4G connection, if such a thing exists (on the other hand, with a 4G connection I could send it out of the building and down the street). Another problem is sound: in a room with other people speaking, the iPad speaker is too weak. I might have to get some small battery-powered speakers and Velcro them to the kit. Elevators, doors and door sills are, of course, tricky.

Here are some pictures from a little excursion around the school library (photo: Martin Uteng, Instagrammed here). A little tricky to talk to the students (again, not enough volume) and some network issues, but at least I am getting better at driving it:

[Photos: driving the Double around the BI library]

(Yes, this is actually research. And fun at work.) The use cases for a Double are several – I could advise students and go to meetings without leaving my home office, for instance. I have done that with Skype and other video conferencing tools for ages, but this thing is much less formal and allows you to putter around and talk to people. One of my colleagues has severe allergies and spends the spring as a pollen refugee in his mountain cabin – I am sure he would love to borrow it.

Compared to a picture on a computer or projection screen this little robot is much more intuitive and humanoid – you can see in which direction it is looking, for instance. I have been told that there are a bunch of these at Stanford, and that at first they were meant to be shared – but it turns out that people want their own, so they can personalize it as I have done with the ugliest bow tie I could find. My colleagues tell me it feels much more natural to speak to me through the Double than through Skype – it is almost as if I am there.

So, some technicalities left to resolve, but this has promise. I am already scheduled to give a talk through it, while I am in the States. And yes, I have already been compared to Sheldon Cooper of The Big Bang Theory. Several times…

Write, that I may find thee

A Google Dance – when Google changes its rankings of web sites – used to be something that happened infrequently enough that each “dance” had a name – Boston, Fritz and Brandy, for instance – but dances now happen more than 500 times per year, with names like Panda #25 and Penguin 2.0, to name a few relatively recent ones. (There is even a Google algorithm change “weather report”, as many of the updates are now unnamed and very frequent.) As a consequence, search engine optimization seems to me to be changing – and funnily enough, it is less and less about optimization and more and more about origination and creation.

It turns out that Google is now more and more about original content – that means, for instance, that you can no longer boost your web site simply by using Google Translate to create a French or Korean version of your content. Nor can you create lots of stuff that nobody reads – and by nobody, I mean not just that nobody reads your article, but that the incoming links are from, well, nobodies. According to my sources, Google’s algorithms have now evolved to the point where there are just two main mechanisms for generating the good Google juice (and they are related):

  1. Write something original and good, not seen anywhere else on the web.
  2. Get some incoming links from web sites with good Google juice, such as the New York Times, Boing Boing, a well-known university or, well, any of the “Big 10” domains (Wikipedia, Amazon, YouTube, Facebook, eBay (2 versions), Yelp, WebMD, Walmart, and Target).
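
To make the second mechanism concrete, here is a toy sketch of the classic PageRank idea from the original Brin & Page paper – emphatically not Google’s current ranking, and the domain names below are made up for illustration. The point it demonstrates: a single link from a site that is itself well linked counts for more than several links from sites nobody links to.

```python
# Toy PageRank sketch (an illustration, not Google's production algorithm).
# Made-up domains: one blog gets a single link from a well-linked site,
# the other gets three links from sites that nobody links to.
links = {
    "nytimes.com":   ["yourblog.com"],     # one link from a well-linked site
    "reader1.com":   ["nytimes.com"],
    "reader2.com":   ["nytimes.com"],
    "reader3.com":   ["nytimes.com"],
    "spamfarm1.net": ["otherblog.com"],    # three links from obscure sites
    "spamfarm2.net": ["otherblog.com"],
    "spamfarm3.net": ["otherblog.com"],
    "yourblog.com":  [],
    "otherblog.com": [],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages    # dangling pages spread rank evenly
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
# yourblog.com (one authoritative link) ends up ranked above
# otherblog.com (three links from unlinked-to sites).
```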

The importance of the top domains is increasing, as seen by this chart from mozcast.com:

[Chart from mozcast.com showing the growing share of search results going to the top 10 domains]

In other words, search engines are moving towards the same strategy for determining what is important as the rest of the world uses: if it garners the attention of the movers and shakers (and, importantly, is not a copy of something else), it must be important and hence worthy of your attention.

For the serious companies (and publishers) out there, this is good news: write well about interesting things, and you will be rewarded with more readers and more influence. This also means that companies seeking to boost their web presence may be well advised to hire good writers and create good content, rather than resort to all kinds of shady tricks – duplication of content, acquired traffic (including hiring people to search Google and click on your links and ads), and backlinks from serially created WordPress sites.

For writers, this may be good news – perhaps there is a future for good writing and serious journalism after all. The difference is that now you write to be found original by a search engine – and should a more august publication with a human behind it see what you write and publish it, that will just be a nice bonus.

MIT CISR Research Briefing on Enterprise Search

Last year I had the pleasure of spending time as a visiting scholar at MIT Center for Information Systems Research, and here is one of the results, now available from CISR’s website (free, but you have to register. Absolutely worth it, lots of great material):

Research Briefing: Making Enterprise Search Work: From Simple Search Box to Big Data Navigation; Andersen; Nov 15, 2012

Most executives believe that their companies would perform better if “we only knew what we knew.” One path to such an objective is enhanced enterprise search. In this month’s briefing, “Making Enterprise Search Work: From Simple Search Box to Big Data Navigation,” Visiting Scholar Espen Andersen highlights findings from his research on enterprise search. He notes that enterprise search plays a different role from general web or site-specific searches and it comes with its own unique set of challenges – most notably privacy. But companies trying to capitalize on their knowledge will invariably find search an essential tool. Espen offers practical advice on how to develop a more powerful search capability in your firm.

Why is internal search so hard?

Have experience or an opinion? I would love to talk to you!

In collaboration with MIT CISR, I am currently researching enterprise search – i.e., the use of search engines inside corporations, whether it be for letting people outside the corporation search your website, or for letting employees search the internal collection of databases, documents, and audiovisual material. Consumer search – our everyday use of Google and other search engines – in general is very good and very fast, to the point where most people search for stuff rather than categorize it. Enterprise search, on the other hand, is often imprecise, confusing, incomplete and just not as good a source of information as searching the open Internet.

There are many reasons for this, having to do with the content (most enterprise content lacks the hyperlinks essential for prioritization, for instance), with the organization (lack of resources for and knowledge of search optimization, security policy issues, lack of an identified application owner), and with the users (who are too few to generate meaningful statistics and who do not, to the extent people do on the Internet, make their information findable).

Nevertheless, there are examples of companies – often consulting companies, research-oriented firms and others who deal in large amounts of information, such as pharmaceuticals and publishers – that do good work with internal, enterprise search. I have interviewed a few of those and a few search experts.

Now I would very much like to talk to anyone interested in this topic – do you have experience, viewpoints, war stories, examples, ideas about what to do and especially what not to do? Then I am very interested in talking to you! Please leave a comment below or send me an email at self@espen.com.

How students search

David Weinberger has posted his notes from a very interesting session at Berkman that I for some reason missed – Alison Head’s presentation of studies of students’ information search behavior from the Project Information Literacy project. The findings confirm a lot of what I would have thought just by observing my own (young adult) children’s search behavior, or, for that matter, my own. Wikipedia is used a lot, and quite intelligently, in the beginning of a search. You talk to librarians and other people to get the vocabulary necessary for a search. And students (and everyone else) want one database, not many.

Jeff Jarvis on his public parts

(taking notes from a presentation at Harvard Law School’s Berkman Center, December 6, 2011)

(David Weinberger has a much better writeup.)

Jeff Jarvis is rake thin, grey-haired, dressed in black and bearded, and has had cancer, but any similarity with Steve Jobs stops there. His latest book, Public Parts, advocates more openness in a time concerned with privacy yet somehow unable to stop pressing that “like” button on Facebook.

His key point is that the tools of publicness need to be protected – privacy and publicness are not in opposition – and his fear is that privacy concerns are misapplied and sometimes dangerous.

When Kodak was invented, there were articles written about “fiendish Kodakers” lying in wait, and the cameras were banned in some public parks. Anxiety about privacy goes back to the Gutenberg press, microphones, video cameras. Society is looking for norms, but legislates to keep the past, in terms of the past.

The tools of making publics: Habermas said public discourse started in coffee houses in the 18th century as a counterweight to government power. It was ruined by mass media. Now we have the tools of publicness, and we get things like Occupy Wall Street. Jeff started (after a few glasses of Pinot Noir) the #fuckyouwashington tag, which spawned a platform with more than 110,000 tweets.

The Gutenberg parentheses: Before Gutenberg, knowledge was non-linear, with Gutenberg it became linear, after Gutenberg it is non-linear and the knowledge we revere is the net. Danish professors arguing that the transition into Gutenberg was hard, and the transition out of it will be equally hard. Web content still shaped as analogues of the past.

Had to understand what privacy is – first take was that it had something to do with control. Came to think that privacy is an ethic. This means that publicness is also an ethic, an ethic of sharing information. Sharing his prostate cancer, including impotence, on the web. Hard to do, but got tremendous value out of it.  Various people contributed to the blog, telling things that the doctors won’t say, etc.

We need to learn from young people how to control sharing. Danah Boyd: COPPA requires companies not to keep information about children younger than 13. But more than 50% of 12-year-olds have Facebook accounts – “on the internet everyone’s 14.” The Sullivan principles (rules developed for companies operating in apartheid-era South Africa) may help.

Jarvis proposes some principles:

  1. We have the right to connect.
  2. We have the right to speak.
  3. We have the right to assemble and to act.
  4. Privacy is an ethic of knowing
  5. Publicness is an ethic of sharing
  6. Our institutions’ information should be public by default, secret by necessity
  7. What is public is a public good
  8. All bits are created equal
  9. Internet must stay open and distributed

Fear that governments and companies will take this away.

Various questions in the question round – but the discussion didn’t really take off.

Jeff comes off somewhat like his books: well articulated, with many interesting and well-described examples, but I keep looking for more analysis and less description. More depth, simply, not just a plea that openness is good and we need to develop norms for how to handle it. But the “history of the private and the public” part of his book is very good. And it does make for an interesting read.

Computers taking over: Examples

I am currently thinking about how computers are taking over more and more of what we humans can do, in ways we did not think about just a few years ago. The impetus for this, of course, is Brynjolfsson and McAfee’s recent e-book Race Against The Machine, where the main examples given are Google’s driverless cars, instant translation software, and automated paralegal research. I’ll use this blog post as a repository for examples of this, so here goes:

Continue reading

Record companies lose, artists gain

In early September, two of my M.Sc. students handed in their thesis, which has created quite a stir in the Norwegian music industry. I think this has applicability outside Norway, so here is a translation (and light edit) of the Norwegian-language press release and a link to the full report (PDF, 3.4 MB):

After 10 years of digitalization of music, the average (Norwegian) musician’s income has increased by 66%. As a group, the only losers in digital music seem to be the record companies. This is the conclusion of an M.Sc. study done by students Richard Bjerkøe and Anders Sørbo at the Norwegian School of Management BI in Oslo.

The thesis “The Norwegian Music Industry in the Age of Digitalization” shows that the musicians’ income increase is due to increased income from concerts, various collection agencies and stipends from the government in the period from 1999 to 2009. During the same period, record sales have decreased by about 50%. The fall in income from record sales is less important for the musicians, however, since, on average, they receive only 15% of record sales, whereas they receive on average 50% from concerts and 80% from collection agencies (which collect royalties from radio play and other uses of the artists’ productions).

– In the interviews we have done with a number of musicians and music producers, the musicians say they are losing money on digitalization, but the numbers show that it is the record companies, not the artists, who are losing, say Bjerkøe and Sørbo.

– The fall in record sales also means that record companies are becoming less important as launchpads for new artists, and that records to a larger degree become “business cards” – i.e., a marketing tool – to attract audiences to concerts.

Espen Andersen, associate professor at the Norwegian School of Management, has been the faculty advisor for the thesis. He thinks the results show that artists in the future will have more of their income from concerts and by being played on the radio, TV or Internet streaming services. Musicians will also, to a larger extent, have to take responsibility for their own marketing. The future of the record companies is uncertain and they will need to redefine their role in the music industry.

Facts:

  • Income from concerts has increased, on average, 136% from 1999 to 2009
  • Income from collection organizations such as TONO, Gramo and others has increased 108% from 1999 to 2009
  • Stipends and other support from the government have increased 154% from 1999 to 2009
  • The number of registered active musicians has increased by about 28% during this period
  • All figures have been adjusted for inflation.

For questions, please contact

  • Richard Bjerkøe, +47 9181 8686, rbjerkoe@gmail.com
  • Anders Sørbo, +47 9284 0098, anders.sorbo@no.experian.com
  • Espen Andersen, +47 4641 0452, self@espen.com

Liberating the process followers

I highly recommend attending the following presentation at the Norwegian Polytechnic Society on Wednesday September 29th at 5 pm. In particular, I think anyone associated with creating systems for complex decision making, especially in the public sector, would find it interesting.

Update Oct.1: You will find a video of the talk here.

Innovative systems for public services: From process following to problem solving
Dr. Richard Pawson, Naked Objects Group

Dr. Richard Pawson is founder of Naked Objects Group, and a former head of Research Services for Computer Sciences Corporation. He has 30 years of experience in IT and related businesses, and has given presentations and consulted with companies all over the world. He holds a Ph.D. in computer science from Trinity College Dublin, Ireland.

In this discussion, he will talk about how new, innovative systems can change how case workers in public services do their job, transforming them from process followers to problem solvers. Richard has designed and implemented a large and very successful system for the Irish Department of Social and Family Affairs which, building on object orientation (a Norwegian invention), allows case workers ("saksbehandlere") to handle very complex problem situations with much greater degrees of freedom than previously possible.

Richard is a highly interesting and entertaining speaker with deep insight into the relationship between information technology and organizational issues. You can look forward to an eye-opening perspective on the organizational issues in public services and how innovative and advanced systems can contribute to solving them.

Does LinkedIn help or disrupt headhunters?

(I am looking for one or two M.Sc. students to research this question for their thesis.)

The first users of LinkedIn were, as far as I can tell, headhunters (at least the first users with 500+ contacts and premium subscriptions.) It makes sense – after all, having a large network of professionals in many companies is a requirement for a headhunter, and LinkedIn certainly makes it easy not only to manage the contacts and keep in touch with them, but also allows access to each individual contact’s network. However, LinkedIn (and, of course, other services such as Facebook, Plaxo, etc.) offers its services to all, making connections visible and to a certain extent enabling anyone with a contact network and some patience to find people that might be candidates for a position.

I suspect that the evolution of the relationship between headhunters and LinkedIn is a bit like that of fixed-line telephone companies to cell phones: in the early days, cell phones were welcomed because they extended the network and were an important source of additional traffic. Eventually, like a cuckoo’s egg, the new technology displaced the old one, and cell phones have now begun to replace fixed lines. Will LinkedIn and similar professional networks replace headhunters?

If you ask the headhunters, you will hear that finding contacts is only a small part of their value proposition – what you really pay for is the ability to find the right candidate, to make sure that this person is competent, motivated and available, and that this kind of activity cannot be outsourced or automated via some computer network. They will grudgingly acknowledge that LinkedIn can help find candidates for lower-level and middle-management positions, but that for the really important positions, you will need the network, judgment and evaluative processes of a headhunting company.

On the other hand, if you ask HR departments charged with finding people, they will tell you that LinkedIn and, to a certain extent, Facebook are the greatest thing since sliced bread when it comes to finding people quickly, vetting candidates (sometimes discovering youthful indiscretions) and establishing relationships. I have heard people enthuse over not having to use headhunters anymore.

So, the incumbents see it as a low-quality irrelevance, while the users see it as a useful and cheap replacement. To me, this sounds suspiciously like a disruption in the making, especially since, in the wake of the financial crisis, companies are looking to save money and the HR departments would dearly like to provide more value for less money, since they are often marginalized in the corporation.

I would like to find out if this is the case – and am therefore looking for a student or two who would like to do their Master’s thesis on this topic, under my supervision. The research will be funded through the iAD Center for Research-based innovation. Ideally, I would want students who want to research this with a high degree of rigor (perhaps getting into network analysis tools) but I am also willing to talk to people who want to do it with more traditional research approaches – say, a combination of a questionnaire and interviews/case descriptions of how LinkedIn is used by headhunters, HR departments and candidates looking for new challenges.

So – if you are interested – please contact me via email at self@espen.com. Hope to hear from you!

GRA6821 Fall 2010 – some pointers

To anyone taking (or thinking about taking) my GRA6821 (Technology Strategy, or whatever the name is) course this fall – here are a few things that, at least at this point, are going to happen:

  • Since there will be many students at the course (about 70 so far) it has been split into two sessions. The course will be on Thursdays in classroom C2-040. The students will be split into two groups (more about that later, I am looking for a good mix of backgrounds), one of which will start at 0800, and one at 1100. The groups will alternate every week. Teaching will be case-based, meaning that you as a student have to show up, have a name card, and be in the same seat for every class.
  • For some lectures, classes may be merged (for instance, if we have a guest lecturer, the first class may start an hour later, the second an hour earlier – and the guest lecturer will not have to do the same talk twice).
  • We will have a couple of "special" classes, so far two are relatively confirmed:
    • One (tentatively scheduled for September 16th) will involve the iAD project, an advanced search technology research project hosted by FAST/Microsoft Enterprise Search. Our visitors will be a team of researchers from UCD/DCU Ireland, demonstrating video search on Apple iPads. As part of the program, students will participate as experimental users of the system.
    • The second, probably towards the end of September, will involve McKinsey, the consulting company, with discussions about consulting in a technology-rich environment. McKinsey has a global practice of "business technology" and will use expertise from that area in an exercise involving technology case analysis.
    • Possibly we will have other, similar events. And definitely some exciting guest lecturers.
  • For those of you wishing to prepare early, take a look at the previous courses arranged (last year’s here). The two main books (Information Rules and The Innovator’s Solution) are available in paper and electronic form from many sources, and a good idea might be to get them early and read them over the summer. The other literature will be either from web sources or made electronically available via BI’s new learning platform, It’s Learning (more about this later), or another platform.
  • Evaluation will, as usual, be a combination of classroom participation, smaller assignments during the course, and a final paper. New this year is the form of the final paper – this will be a case description of a Norwegian technology company, which the students can choose from a list (provided later) and write up in a specified format. These case descriptions will go in as research material for the project "A Knowledge-based Norway", preferably under the "information technology" sub-study. They will be written by students in pairs and delivered in a collaborative context, using some form of social software such as Ning, WordPress, Google Docs or Origo.

I am very much looking forward to a course that I hope and think will be fun, interesting and useful both to take and to teach. And until August 19th, I wish you all a very good summer!

Stephen Wolfram’s computable universe

I love Wolfram Alpha and think it has deep implications for our relationship with information, indeed our use of language both in a human-computer interaction sense and as a vehicle for passing information to each other.

In this video from TED2010, Stephen Wolfram lays out (and his language and presentation have developed considerably since Alpha was launched a year ago) where Alpha fits as an exploration of a computable universe, enabling the experimental marriage of the precision of mathematics with the messiness of the real world.

This video is both radical and incremental: the bold statement that a thought experiment such as computable universes (see Neal Stephenson’s In the Beginning…Was the Command Line, specifically the last chapter, for an entertaining explanation) actually could be generated and investigated is as radical as anything Wolfram has ever proposed. The idea of democratization of programming, on the other hand, is as old as COBOL – and I don’t think Alpha or Mathematica is going to provide it – though it might go some way, particularly if Alpha gains some market share and the idea of computing things in real time rather than accessing stored computations takes hold.

Anyway – see the video, enjoy the spark of ideas you get from it – and try out Wolfram Alpha. My best candidate for the "insert brief insightful summary research" button I have always been looking for on my keyboard.

Reflections on Accenture’s Technology Lab

I am writing this from Accenture’s Sophia Antipolis location, where I am visiting with a group of executive students taking a course called Strategic Business Development and Innovation (the second time, incidentally, last year’s notes are here). Much of this course is around how to use technology (in a very wide sense of the word) to do innovation in organizations. To turn this into practice, my colleague Ragnvald Sannes and I run the course as an innovation process in itself – the students declare an innovation project early in the course, and we take them through the whole process from idea to implementation plan. To further make this concrete, we collaborate with Accenture (chiefly with Kirsti Kierulf, Director of Innovation in Norway) to show the students some of the technologies that are available.

Accenture Technology Labs is a world-wide, relatively small part of Accenture’s systems integration and technology practice, charged with developing showcases and prototypes in the early stage where Accenture’s clients are not yet willing to fund development. While most consulting companies have this kind of activity, I like Accenture’s approach because they are very focused on putting technology into context – they don’t develop Powerpoints (well, they do that, too) but prototypes, which they can show customers. I see the effect on my students: I can explain technology to them (such as mobility, biometrics, collaboration platforms) but they don’t see the importance until it is packaged into, say, the Next Generation Bank Branch or an automated passport control gate.

Making things concrete – telling a story through hands-on examples – is more important than most companies think. When it comes to technology, this is relatively simple: if you are a technology provider, you take your own technology and build example applications of it. If you are vendor-agnostic, like Accenture, you take technology from many vendors and showcase the integration. If your technology is software-based, or consists of process innovations, then you showcase your own uses of it. Here in Sophia we have seen how Accenture uses collaboration platforms internally in the organization, for instance. (Otherwise known as eating your own dog food.)

Having a physical location is also very important. At the Norwegian School of Management, we have a library that we like to showcase – a "library of the future" where the students have flexible work areas, wireless access to all kinds of information, in an attractive setting. This looks nice on brochures, but also allows us to highlight that the school is about learning and research, and allows us to tell that story in a coherent manner. I see Accenture as doing the same thing with their labs – they develop technology, but also showcase the activity and its results to the rest of the world. The showcasing has perceived utility, generating the funds and managerial attention (or, perhaps, inattention) necessary to sustain the prototype-producing capability.

Quite a difference from slides and lunch meetings, I say. And rather refreshing. An example that more companies should follow.

GRA6821 Eleventh lecture: Search technology and innovation

(Friday 13th November – 0830-about 1200, room A2-075)

FAST is a Norwegian software company that was acquired by Microsoft about a year and a half ago. In this class (held jointly with an EMBA class), we will hear presentations from people in FAST, from Accenture, and from BI. The idea is to showcase a research initiative, to learn something about search technology, and to see how a software company approaches the market in cooperation with partners.

To prepare for this meeting, it is a good idea to read up on search technology, both from a technical and business perspective. Do this by looking for literature on your own – but here are a few pointers, both to individual articles, blogs, and other resources:

Articles:

  • How search engines work: Start with Wikipedia on web search engines, go from there.
  • Brin, S. and L. Page (1998). The Anatomy of a Large-Scale Hypertextual Web Search Engine. Seventh International WWW Conference, Brisbane, Australia. (PDF). The paper that started Google.
  • Rangaswamy, A., C. L. Giles, et al. (2009). "A Strategic Perspective on Search Engines: Thought Candies for Practitioners and Researchers." Journal of Interactive Marketing 23: 49-60. (in Blackboard). Excellent overview of some strategic issues around search technology.
  • Ghemawat, S., H. Gobioff, et al. (2003). The Google File System. ACM Symposium on Operating Systems Principles, ACM. (This is medium-to-heavy-duty computer science – I don’t expect you to understand it in detail, but note the difference between this system and a normal database system: the search system is optimized towards an enormous number of queries (reads) but relatively few insertions of data (writes), whereas a database is optimized towards handling data insertion fast and well.)
  • These articles on Google and others.

Blogs

Others

Longer stuff, such as books:

  • Barroso, L. A. and U. Hölzle (2009). The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Synthesis Lectures on Computer Architecture. M. D. Hill, Morgan & Claypool. (Excellent piece on how to design a warehouse-scale data center – i.e., how do these Google-monsters really work?)
  • Weinberger, D. (2007). Everything is Miscellaneous: The Power of the New Digital Disorder. New York, Henry Holt and Company. Brilliant on how the availability of search changes our relationship to information.
  • Morville, P. (2005). Ambient Findability, O’Reilly. See this blog post.
  • Battelle, J. (2005). The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. London, UK, Penguin Portfolio. See this blog post.

What is Technology Strategy?

I run a research center called Centre for Technology Strategy at the Norwegian School of Management. Inevitably, the question comes up – what is technology strategy?

In my mind, the answer is simple and comes down to two things: the realization that most changes in the world are due to changes in technology, and, hence, that it is vitally important for managers to understand how technology evolves and how this evolution impacts their companies.

I like to illustrate this with a diagram of such mind-boggling simplicity that it is almost embarrassing to present it here. On the other hand, it seldom fails to inform when I use it in presentations – and a number of my collaborators through the years like it enough to use it in theirs:

[Diagram: technology drivers change the business environment, which in turn changes company strategies]

In words: technology drivers – i.e., changes in how we do things – change the business environment, which in turn imposes changes in strategy on companies. Technology strategy aims to enable companies to understand the technology drivers so that they can change their strategies before the business environment forces them to.

This is by no means easy. It may be hard to understand what the drivers are – if you were a producer of travel alarm clocks, would you have foreseen the use of cell phones as alarm clocks? And though the drivers may be easy to understand, you may under- or overestimate the time it takes before your business environment changes. Lastly, it may be easy to understand both the change and the timing, but hard to deal with the change itself. Newspapers and book publishers, for instance, can easily see what is happening to the music industry and understand how the business environment is changing, yet find themselves repeating the errors of the music industry because the necessary changes go against the norms and values of those in power, as well as against their technology and their business model.

To understand technology strategy, of course, you need also to understand the current business environment – in terms of the technology currently used – and how it shapes current strategy. And you need to have an understanding of technology evolution in general and the evolution of technology in your industry in particular. Lastly, you need an understanding of how to change technology inside organizations – something which requires an understanding of not just changing technology, but also organizational structures, incentive systems, and norms and values.

(part I of a series of short and rather irreverent articles on various aspects of Technology Strategy)

Are social networks a help or a threat to headhunters?

In a currently hot YouTube video which breathlessly evangelizes the revolutionary nature of social networks, I found this statement: "80% of companies are using LinkedIn as their primary tool to find employees". In the comments this is corrected to "80 percent of companies use or are planning to use social networking to find and attract candidates this year", which sounds rather more believable. Social media is where the young people (and, eventually, those of us in the middle ages as well) are, so that is where you should look.

At the same time, many of the most prolific users of LinkedIn (and, at least according to this guy, Twitter), both in terms of number of contacts and other activities, are headhunters. It is these people’s business to know many people and be able to find someone who matches a company’s demands.

Headhunters are the proverbial networkers – they derive their value from knowing not just many people, but the right people. In particular, headhunters that know people in many places are valuable, because they would then be the only conduit between one group and another. Your network is more valuable the fewer of your contacts are also in contact with each other.

The American sociologist Ronald S. Burt, in his book Structural Holes: The Social Structure of Competition (1992), showed that social capital accrues to those who not only know many people, but have connections across groups. In other words, if everyone had been directly linked, you would have a dense network structure. The fact that we aren’t means that there are structural holes – hence the term. Picture a social network of 9 individuals: person A derives social capital from being the link between two groups that otherwise are only internally connected. A would be an excellent headhunter here. (Much as profits can only be generated if you can locate market imperfections.)
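
As a small illustration of Burt’s point, here is a sketch (using the networkx library and a hypothetical nine-person network like the one just described) of how A’s brokerage position shows up as betweenness centrality:

```python
# Two tightly knit groups whose only connection runs through person A.
import itertools
import networkx as nx

G = nx.Graph()
group1 = ["B", "C", "D", "E"]
group2 = ["F", "G", "H", "I"]

G.add_edges_from(itertools.combinations(group1, 2))  # group 1 fully connected
G.add_edges_from(itertools.combinations(group2, 2))  # group 2 fully connected
G.add_edges_from([("A", "B"), ("A", "F")])           # A bridges the two groups

# Betweenness centrality counts how often a node lies on the shortest path
# between two others - a rough proxy for Burt's brokerage position.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
# A scores highest: every path between the two groups passes through A.
# A's direct contacts B and F come next; everyone else scores zero.
```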

LinkedIn is a social network, indistinguishable from a regular one (i.e., one that is not digitally facilitated) except that you can search across the network, directly up to three levels away, indirectly a bit further. Headhunters like it for this reason, and use it extensively in the early phases of locating a candidate. The trouble is, LinkedIn (not to mention the tendency of more and more people to have their CV online on regular websites) makes searching for candidates easy for everyone else as well. In other words: while initially helpful, will the long-term result of this searchability be that headhunters are no longer necessary?

Search technology – in social networks as well as in general – lowers the transaction cost of finding something. Lower transaction costs favor coordination by markets rather than by hierarchy (or, in this case, by network). Hence, the value of having a central position in that network should diminish. On the other hand, search technology (in networks in particular) allows you to extend your network, and hence increase your social capital. Which effect is stronger remains to be seen.

Anyway, this should make for interesting research. Anyone out there in headhunterland interested in talking to me about their use of these tools?

Plagiarism showcased – and a call for action

I hate plagiarism: partly because it has happened to me, partly because I publish way too little since I overly self-criticize for lack of original thinking, and partly because I have had it happen with quite a few students and am getting more and more tired of having to explain, even to executive students with serious job experience, that clipping somebody else’s text and showing it as your own is not permissible – this year, I even had a student copy things out of Wikipedia and argue that it wasn’t plagiarism because Wikipedia is not copyrighted.

I suspect plagiarism is a bigger problem than we think. The most recent spat is noted in Boing Boing – read the comments if you want a good laugh and some serious discussion. (My observation, not particularly original: Even if this thing wasn’t plagiarized, isn’t this rather thin for a doctoral dissertation?)

The thing is, plagiarism will come back to bite you, and with the search tools out there, I can see a point in a not too distant future where all academic articles ever published will be fed into a plagiarism checker, with very interesting results. Quite a few careers will end, no doubt after much huffing and puffing. Johannes Gehrke and friends at Cornell have already done work on this for computer science articles – I just can’t wait to see what will come out of tools like these when they really get cranking. I seem to remember Johannes as saying that most people don’t plagiarize, but that a few seem to do it quite a lot.

It is high time we turn the student control protocols loose on published academic work as well. Nothing like many eyeballs to dig out that shallowness…

A wave of Google

This presentation from the Google I/O conference is an 80-minute demonstration of a really interesting collaborative tool that very successfully blends the look and feel of regular tools (email, Twitter) with the embeddedness and immediacy of wikis and shared documents. I am quite excited about this and hope it makes it out into the consumer space and does not just rest inside single organizations – collaborative spaces can create a world of many walled gardens, and I am a person who works as much between organizations as in them.

Google Wave really shows the power of centralized processing and storage. Here are some things I noted and liked:

  • immediate updating (broadcast) to all clients, keystroke by keystroke
  • embedded, fully editable information objects
  • history awareness (playback interactions)
  • central storage and broadcast mean you can edit information objects and have the changes reflected back in previous views – a pretty good indication that the architecture of this system is a tape of interactions played forward (see the sketch after this list)
  • concurrent collaborative editing (I want this! No more refreshes!)
  • cool extensions, such as a context-aware spell checker, an immediate link creator, concurrent searcher
  • programs are seen as participants much like humans
  • easy developer model, all you need to do is edit objects and store them back
  • client-side and server-side API
  • interactions with outside systems
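
Here is a minimal sketch of what such a “tape of interactions” might look like – my reading of the general idea, not Google’s actual Wave implementation: every edit is stored as an operation, and both the current document and any historical view are produced by replaying the tape from the beginning.

```python
# Minimal event-sourcing sketch: a document as a tape of edit operations.
from dataclasses import dataclass

@dataclass
class Insert:
    author: str
    position: int
    text: str

def replay(tape, upto=None):
    """Rebuild the document by playing the operation tape forward."""
    doc = ""
    for op in tape[:upto]:
        doc = doc[:op.position] + op.text + doc[op.position:]
    return doc

tape = [
    Insert("espen", 0, "Hello"),
    Insert("lene", 5, ", world"),
    Insert("espen", 12, "!"),
]

print(replay(tape))           # "Hello, world!" - the current view
print(replay(tape, upto=1))   # "Hello"         - playback of an earlier state
```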

I can see some strategic drivers behind this: Google is very much threatened by walled gardens such as Facebook, and this could be a great way of breaking them open (remember, programs go from applications to platforms to protocols, and this is a platform built over OpenSocial, which jams open walled gardens). This could perhaps be just what I need to work more effectively across several organizations. Just can’t wait to try this out when it finally arrives.

From surfing the net to surfing the waves….

Update: Here is the Google Blog entry describing Wave from Lars Rasmussen.

The datacenter is the new mainframe

From Greg Linden comes a link and a reference to a very interesting book by two Google engineers: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (PDF, 2.8Mb) by Luiz André Barroso and Urs Hölzle. This is a fascinating introduction to data center design, with useful discussions of architecture, cooling, and how to reduce power use (it turns out, for instance, that getting computers to use power in proportion to their level of utilization is extremely important).
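
A back-of-envelope sketch shows why that matters so much; the wattage and utilization figures below are illustrative assumptions of mine, not numbers from the book:

```python
# Why energy proportionality matters: servers in large clusters typically
# run far below peak, so the idle-power floor dominates the energy bill.
def power_draw(utilization, peak_watts, idle_fraction):
    """Linear power model: an idle floor plus a share that scales with load."""
    idle_watts = peak_watts * idle_fraction
    return idle_watts + (peak_watts - idle_watts) * utilization

utilization = 0.3   # assumed typical cluster load, well below peak
peak_watts = 500    # assumed peak draw per server

conventional = power_draw(utilization, peak_watts, idle_fraction=0.5)
proportional = power_draw(utilization, peak_watts, idle_fraction=0.0)

print(f"conventional server: {conventional:.0f} W")   # 325 W
print(f"energy-proportional: {proportional:.0f} W")   # 150 W
```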

I suspect that even highly experienced data center designers will find something useful here. The book is written for someone with some degree of technical expertise, but you do not need a deep background in computer science to find much here that is interesting and useful.

One of my recurring ideas (and I am by no means alone in thinking this) is that the Norwegian west coast, with its cool climate, relatively abundant hydroelectric energy and underused industrial infrastructure (we used to have lots of electrochemical and electrometallurgical plants) could be a great place to do most of Europe’s computing. Currently we sell our electric energy to Europe through power lines, which incurs a large energy loss. Moving data centers to Norway and distributing their functionality through fiberoptic cables seems a much more effective way of doing things to me, especially since that region of the country has a reasonable supply both of energy engineers and industrial workers with the skill set and discipline to run that kind of operation.

Now, if I could only find some investors…

From links to seeds: Edging towards the semantic web

Wolfram Alpha just may take us one step closer to the elusive Semantic Web, by evolving a communication protocol out of its query terms.

(this is very much in ruminating form – comments welcome)

Wolfram Alpha, an exciting new kind of "computational" search engine, officially launched on May 18. Rather than looking up documents where your question has been answered before, it actually computes the answer. The difference, as Stephen Wolfram himself has said, is that if you ask what the distance is to the moon, Google and other search engines will find you documents that tell you the average distance, whereas Wolfram Alpha will calculate what the distance is right now, and tell you that, in addition to many other facts (such as the average). Wolfram Alpha does not store answers, but creates them every time. And it primarily answers numerical, computable questions.

The difference between Google (and other search engines) and Wolfram Alpha is not so clear-cut, of course. If you ask Google "17 mpg in liters per 100km" it will calculate the result for you. And you can send Wolfram Alpha non-computational queries such as "Norway" and it will give an informational answer. The difference lies more in what kind of data the two services work against, and how they determine what to show you: Google crawls the web, tracking links and monitoring user responses, in a sense asking every page and every user of their services what they think about all web pages (mostly, of course, we don’t think anything about most of them, but in principle we do.) Wolfram Alpha works against a database of facts with a set of defined computational algorithms – it stores less and derives more. (That being said, they will both answer the question "what is the answer to life, the universe and everything" the same way….)
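
For the curious, the computation in the mpg example above is simple unit conversion – a small sketch:

```python
# Converting 17 miles per (US) gallon into litres per 100 km.
LITRES_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def mpg_to_litres_per_100km(mpg):
    km_per_litre = mpg * KM_PER_MILE / LITRES_PER_US_GALLON
    return 100 / km_per_litre

print(f"{mpg_to_litres_per_100km(17):.1f} L/100 km")  # about 13.8
```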

While the technical differences are important and interesting, the real difference between WA and Google lies in what kind of questions they can answer – to use Clayton Christensen’s concept, the different jobs you would hire them to do. You would hire Google to figure out information, introductions, background and concepts – or to find that email you didn’t bother filing away in the correct folder. You would hire Alpha to answer precise questions and get the facts, rather than what the web collectively has decided are the facts.

The meaning of it all

Now – what will the long-term impact of Alpha be? Google has made us replace categorization with search – we no longer bother filing things away and remembering them, for we can find them with a few half-remembered keywords, relying on sophisticated query front-end processing and the fact that most of our not-that-great minds think depressingly alike. Wolfram Alpha, on the other hand, is quite a different animal. Back in the 80s, I once saw someone exhort their not very digital readers to think of the personal computer as a "friendly assistant who is quite stupid in everything but mathematics." Wolfram Alpha is quite a bit smarter than that, of course, but the fact is that we now have access to a service which, quite simply, will do the math and look up the facts for us. Our own personal Hermione Granger, as it were.

I think the long-term impact of Wolfram Alpha will be to further something that may not have started with Google, but certainly became apparent with them: the use of search terms (or, if you will, seeds) as references. It is already common, rather than writing out a URL, to help people find something by saying "Google this and you will find it". I have a couple of blogs and a web page, but googling my name will get you there faster (and you can misspell my last name and still not miss). The risk in doing that, of course, is that something can intervene. As I read (in this paper), General Motors a few years ago had an ad for a new Pontiac model, at the end of which they exhorted the audience to "Google Pontiac" to find out more. Mazda quickly set up a web page with Pontiac in it, bought some keywords on Google, and quite literally shanghaied GM’s ad.

Wolfram Alpha, on the other hand, will, given the same input, return the same answer every time. If the answer should change, it is because the underlying data has changed (or, extremely rarely, because somebody figured out a new way of calculating it). It would not be because someone external to the company has figured out a way to game the system. This means that we can use references to Wolfram Alpha as shorthand – enter "budget surplus" in Wolfram Alpha, and the results will stare you in the face. In the same sense that math is a terse and precise language for expressing certain concepts, Wolfram Alpha seeds will, I think, emerge as a notation for referring to factual information.

A short detour into graffiti

Back in the early-to-mid-90s, Apple launched one of the first pen-based PDAs, the Apple Newton. The Newton was, for its time, an amazing technology, but for once Apple screwed it up, largely because they tried to make the device do too much. One important issue was the handwriting recognition software – it would let you write in your own handwriting, and then try to interpret it. I am a physician’s son, and I certainly took after my father in the handwriting department. Newton could not make sense of my scribbles, even if I tried to behave, and, given that handwriting recognition is hard, it took a long time doing it. I bought one, and then sent it back. Then the Palm Pilot came, and became the device to get.

The Palm Pilot did not recognize handwriting – it demanded that you, the user, write to it in a sign language called Graffiti, which recognized individual characters. Most of the characters resembled the regular characters enough that you could guess what they were; for the others you either had to consult a small plastic card or experiment. The feedback was rapid, so experimenting usually worked well, and pretty soon you had learned – or, rather, your hand had learned – to enter the Graffiti characters rapidly and accurately.

Wolfram Alpha works in the same way as Graffiti did: as Stephen Wolfram says in his talk at the Berkman Center, people start out writing natural language but pretty quickly trim it down to just the key concepts (a process known in search technology circles as "anti-phrasing"). In other words, by dint of patience and experimentation, we (or, at least, some of us) will learn to write queries in a notation that Wolfram Alpha understands, much like our hands learned Graffiti.

From links to seeds to semantics

Semantics is really about symbols and shorthand – a word is created as shorthand for a more complicated concept by a process of internalization. When learning a language, rapid feedback helps (which is why I think it is easier to learn a language with a strict and terse grammar rather than a permissive one), simplicity helps, and so does a structure and culture that allows for creating new words by relying on shared context and intuitive combinations (see this great video with Stephen Fry and Jonathan Ross on language creation for some great examples).

And this is what we need to do – gather around Wolfram Alpha and figure out the best way of interacting with the system – and then conduct "what if" analysis of what happens if we change the input just a little. To a certain extent, it is happening already, starting with people finding Easter Eggs – little jokes developers leave in programs for users to find. Pretty soon we will start figuring out the notation, and you will see web pages use Wolfram Alpha queries first as references, then as modules, then as dynamic elements.

It is sort of quirky when humans start to exchange query seeds (or search terms, if you will).  It gets downright interesting when computers start doing it. It would also be part of an ongoing evolution of gradually increasing meaningfulness of computer messaging.

When computers – or, if you will, programs – needed to exchange information in the early days, they did it in a machine-efficient manner – information was passed using shared memory addresses, hexadecimal codes, assembler instructions and other terse and efficient, but humanly unreadable, encoding schemes. Sometime in the early 80s, computers were getting powerful enough that the exchanges gradually could be done in human-readable format – the SMTP protocol, for instance, a standard for exchanging email, could be read and even hand-built by humans (as I remember doing in 1985, to send email outside the company network I was on). The world wide web, conceived in the early 90s and live to a wider audience from 1994, had at its core an addressing system – the URL – which could be used as a general way of conversing between computers, no matter what their operating system or languages. (To the technology purists out there – yes, WWW relies on a whole slew of other standards as well, but I am trying to make a point here.) It was rather inefficient from a machine communication perspective, but very flexible and easy to understand for developers and users alike. Over time, it has been refined from pure exchange of information to the sophisticated exchanges needed to make sure it really is you when you log into your online bank – essentially by increasing the sophistication of the HTML markup language towards standards such as XML, where you can send over not just instructions and data but also definitions and metadata.
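
To illustrate just how human-readable that early protocol was, here is a minimal sketch of an SMTP session spoken "by hand" over a raw socket. The server name and addresses are made-up placeholders, and a modern server would normally demand TLS and authentication before accepting any of this:

```python
# Hand-driving the SMTP dialogue the way one could in 1985.
import socket

commands = [
    b"HELO example.org\r\n",
    b"MAIL FROM:<self@example.org>\r\n",
    b"RCPT TO:<friend@example.com>\r\n",
    b"DATA\r\n",
    b"Subject: Hello from 1985\r\n\r\nJust a line of text.\r\n.\r\n",
    b"QUIT\r\n",
]

with socket.create_connection(("mail.example.com", 25), timeout=10) as s:
    print(s.recv(1024).decode())       # server greeting, e.g. "220 ..."
    for command in commands:
        s.sendall(command)
        print(s.recv(1024).decode())   # each reply is a readable status line
```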

The much-discussed semantic web is the natural continuation of this evolution – programming further and further away from the metal, if you will. Human requests for information from each other are imprecise but rely on shared understanding of what is going on, ability to interpret results in context, and a willingness to use many clues and requests for clarification to arrive at a desired result. Observe two humans interacting over the telephone – they can have deep and rich discussions, but as soon as the conversation involves computers, they default to slow and simple communication protocols: Spelling words out (sometimes using the international phonetic alphabet), going back and forth about where to apply mouse clicks and keystrokes, double-checking to avoid mistakes. We just aren’t that good at communicating as computers – but can the computers eventually get good enough to communicate with us?

I think the solution lies in mutual adaptation, and the exchange of references to data and information in other terms than direct document addresses may just be the key to achieving that. Increases in the performance and functionality of computers have always progressed in a punctuated-equilibrium fashion, alternating between integrated and modular architectures. The first mainframes were integrated, with simple terminal interfaces; they gave way to client-server architectures (exchanging SQL requests), which gave way to highly modular TCP/IP-based architectures (exchanging URLs), which may give way to mainframe-like semi-integrated data centers. I think those data centers will exchange information at a higher semantic level than any of the others – and Wolfram Alpha, with its terse but precise query structure, may just be the way to get there.