Author Archives: Espen

About Espen

For details, see www.espen.com.

A walk around Infosys’ Hyderabad campus

As part of a three-week visit to the Indian School of Business, I saw Infosys’ campus in Hyderabad last Friday. It is extremely impressive, with park-like surroundings. My friend and colleague Ramiro Montealegre and I met with a group of managers in the Enterprise Solutions practice, as part of a joint research project. We then joined a group of ISB students for a presentation of the Hyderabad operation and a tour around the campus.

The Hyderabad campus houses 8,000 employees (or "Infoscions", as Infosys terms them) and last year did export business to the tune of $250m. The campus (which is by no means Infosys’ largest – those are in Bangalore and Mysore) has training facilities with on-campus accommodation for 700 students. There are two large food courts, the obligatory cricket ground, mini-golf and all kinds of recreational facilities for the increasingly hard-to-recruit engineers (though they do receive 1.65 million resumes per year).

I will let the pictures speak for themselves – it is quite a complex. 


Homeland Security bars Rodrigo (y Gabriela) Sanchez

This is rather hard to believe, but apparently, the US Department of Homeland Security has decided that Rodrigo Sanchez, the melodic half of the famous guitar duo Rodrigo y Gabriela, has the same name as someone barred from entering the United States, and has therefore denied him entry. Consequently, the duo has had to postpone or cancel a number of shows they were going to play in the US.

Aside from the fact that Rodrigo y Gabriela are world famous and have been on Letterman and the Tonight Show with Jay Leno, you would think that neither "Rodrigo" nor "Sanchez" is an unusual name in Mexico, or for that matter in any Spanish-speaking country.

Hard to believe.

In the meantime, check out this fun interview with music:

Fill in the form, ye huddled masses….

Interesting op-ed in the FT yesterday, about how the bureaucratic and seemingly unfriendly immigration services in the US are seen as creating problems for industry. Bill Gates has testified before Congress that the best and brightest now have a choice – and that the US needs to grease the skids a bit, so to speak.

Personally, I have found that the best vehicle for rapid entry into the US is a six-month-old baby with a US passport and an American-flag jumpsuit. Smiles all around. Too bad she has grown older and less effective as a ticket in…

As for unfriendliness – yes, the immigration officers can be a hassle, but so are their counterparts all over the world (with the possible exception of the UK and Singapore.) But factor in the work environment: dealing with an endless stream of jet-lagged, hypoglycemic, dishevelled and supremely self-important passengers making fun of your sacred forms, and even the most patient mind will start growing spikes just for self-preservation. See it from their side….

A tenured squirt

After reading Steven Levitt’s musings on whether tenure is a good thing or not in economics, I can’t resist quoting Daniel Dennett (from the incomparable Consciousness Explained, 1991, page 177):

The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life.  For this task, it has a rudimentary nervous system.  When it finds its spot and takes root, it doesn’t need its brain anymore, so it eats it!

(It’s rather like getting tenure.)

Just one perspective, of course. But useful.

Firefox tracks phishing attempts

I normally have no problem weeding out phishing attempts, but today I inadvertently clicked on a mail (not a link) which opened a phishing page in my browser. Firefox was right on it, however, and this was the result:

Firefox phishing warning 

I rather like this functionality – which I didn’t even know Firefox had – and I really like the design of the warning: shade the offending page, display the warning prominently, and give the user the option to decide (both for now and, optionally, for the future) whether this really is a phishing attempt or not.

Excellent!

Not exactly news

Funny article about how users are creating shadow IT departments, using gadgets and downloadable software to create productivity boosters deemed non-kosher by IT, which fears these invasions of its turf.

This has been a problem – and a source of innovation – forever: Digital Equipment used to sell their computers as "engineering equipment" back in the early 80s so that individual departments could get around corporate computing centre standards. Absolutely nothing new here.

Though, of course, with Google tools, downloadable software and cheap computers, you could actually end up having more processing power and LOCs than central IT, if you worked on it. Sobering thoughts for many CIOs, I bet….

(Via Slashdot.) 

Guilt and slavery

Slave ship deck

It is approaching 200 years since the slave trade was abolished in Britain (and thereby, effectively, in the world). The Economist has an excellent analysis of the slave trade and what brought it down, comparing it to the Holocaust in the ability of the "normal" society to disregard what was going on. The campaign against slavery was remarkably successful, but depended on many factors, not least the gradual understanding that slave rebellions eventually would bring down the practice, at least in the West Indies. Once England abolished slavery, it became the chief enforcer against it.

I remember, from a business history class, another analogy to the Holocaust – the fact that most free people in the United States did not understand the suffering of the slaves because their picture of it was skewed. In terms of numbers, most of the slaves were at large farms, where they were driven like cattle by foremen with whips. But to most of the free population, the slaves they met were more likely house slaves – tenants and workers of small farms. These were treated better – and so the perspective on slavery was even more polarized between the slaves and the free than the judicial distinction would imply. Just like the Holocaust, the worst parts of the system were hidden from most people, who instead saw the less unacceptable side of separatism and (at least on the surface) only slightly worse conditions.

The article is interesting because it cool-headedly analyzes the slave trade and its abolition as phenomena, and does not shrink from pointing at some unwelcome facts, such as the involvement of African chiefs, or from drawing connections to the present times. Recommended.

Math as delivered by the hapless student

A former student, perhaps sensing a need for a counterweight to my essay on why math is good for you, sent me these answers to math problems from students for whom motivation probably isn’t enough (though they don’t lack creativity.) Enjoy.

Find X

 


Economics of abundance

Great article by Mike (whoever that might be) at Techdirt: if you can’t compete with free, you can’t compete, period. His argument is that there really is no difference between selling digital and physical goods once scarcity is removed – price will over time move to marginal cost, and whether that cost is 0 or something higher doesn’t matter:

Say I own a factory that cost me $100 million to build (fixed cost) and it produces cars that each cost $20,000 to build (marginal cost). If the market is perfectly competitive, then eventually I’m going to be forced to sell those cars at $20,000 — leaving no profit. Now, let’s look at a different situation. Let’s say that I want to make a movie. It costs me $100 million to make the movie (fixed cost) and copies of that movie each cost me $0 (marginal cost — assuming digital distribution and that bandwidth and computing power are also fixed costs). Now, again, if the market is competitive and I’m forced to price at marginal cost, then the scenario is identical to the automobile factory. My net outlay is $100 million. My profit is zero. Every new item I make brings back in cash exactly what it costs to make the copy — so the net result is the same. It’s no different that the good is priced at $0 or $20,000 — so long as the market is competitive.
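To make the quoted arithmetic concrete, here is a minimal sketch in Python (the function name and the unit counts are mine; the cost figures are from the quote):

```python
def profit(fixed_cost, marginal_cost, price, units):
    # Revenue minus variable costs, minus the up-front fixed cost
    return units * (price - marginal_cost) - fixed_cost

# Car factory: $100m fixed, $20,000 per car, competition pushes price to marginal cost
cars = profit(100_000_000, 20_000, price=20_000, units=1_000_000)

# Movie: $100m fixed, $0 per digital copy, price driven to $0
movies = profit(100_000_000, 0, price=0, units=1_000_000)

print(cars, movies)  # both -100000000: the fixed cost is never recovered
```

Whatever the unit count, pricing at marginal cost leaves the $100m fixed cost unrecovered in both cases – which is Mike’s point.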

On the surface, this has validity from the producers’ point of view, but how about the demand side? If I have to shell out $20,000 for a new car, that is money I could have spent on something else, no matter the profit. If I can get all the movies I want for free, I can consume as many of them as I have time for, at zero short-term cost to myself. The relationship between price and demand is not linear, at least not on an individual basis. And here comes the psychology part: if I decide to shell out 20K for a car, I will make a careful decision, compare features, and try to get as much as possible for my money. If the price of a movie is 0, I will just download a lot of movies and keep them around. The only investment will be in actually watching them, meaning that somehow the company selling them has to get value out of my watching the movie rather than purchasing it.

In-line advertising, it seems, coupled with a way to track actual viewings rather than purchases. Google Adsense in movies, click-on-the-hero’s-coat-and-buy-it, instant delivery and on-demand video services delivered over the web, taking responsibility for the whole viewing experience.

Now, if only my broadband service were up to snuff….

Mothership Internet

Jon Udell has an interesting discussion of social network software: that it will be subsumed into the general Internet over time. LinkedIn, for one thing, is now a fairly well closed-off professional network, useful if you want to get in touch with someone at a particular company, and free of spammers, unless you count the headhunters with 5000+ connections. Incidentally, just like the early academic networks back in the 80s.

The trouble with networks is keeping them at the right size and with the right nodes. When a network grows too large, smaller groups will secede and form subnets, often with dimensions added to the connections between them. That’s the way it is with LinkedIn as well: it started off with very strict limitations on what you could do. Then, if you pay for a premium account, you can send emails that short-circuit the chain of connections, and ask questions that pop up on people’s home screens.

I have 287 connections on LinkedIn, and have been pretty stringent about keeping them to people I have actually interacted with – enough so that I would remember them. The network is useful, but the growing size of the overall network means that some sort of differentiated contact settings are needed. Eventually, LinkedIn will be too large to manage, the ratio of useful information to crap will fall, and people will ignore it, except for certain subnets, tightly interlinked and with many mutual recommendations. Just like the blogosphere three years ago, the Well in its early days, and the various nets back in the 80s.

Hang on to your own website, I say. Manage it well. As long as the network management tools keep getting overwhelmed, make sure your node is polished, well lit, and carefully selective about where its links go.

Web 2.0 animated

I can only agree with Jill – I wish I had done this video (that Michael Wesch did). With a small exception: the first three of the last four words, which push it a tiny bit over the edge.

Phisheries report

Wired has an interesting 25-page article on how the FBI ran a sting operation involving a "flipped" card-scam bulletin board operator. Reminiscent of Markoff’s books about the various hunts for Kevin Mitnick and other misfits.

I can’t help but notice what sad sacks these "hackers" and "cyber criminals" turn out to be. Not much Al Capone about them – just overweight and pale board jockeys sitting up all night hammering out badly spelled instant messages.

Dan Bricklin’s WikiCalc

…is in release 1.0 (blogpost) and available here. Haven’t checked it out yet, but I will – collaborative spreadsheet authoring would seem a great tool for research use, and perhaps also for making data available for public analysis.

Google as super-Akamai

Cringely speculates about what Google is going to do with all those data centers, and how to counter the strategy he thinks they are following by creating an intra-ISP P2P network. His description of why Microsoft won’t compete with Google even though they can is a pretty good example of disruptiveness in action.

Jeff Bezos at MIT

Excellent presentation by Jeff Bezos on Amazon’s three new services: S3 (cloud of storage), Mechanical Turk (cloud of humans, sort of a small-task Google Answers), and Elastic Compute Cloud (cloud of processing). Bezos is an unapologetic geek – then again, in that crowd, he can be. Fun.

Why don’t CIOs become CEOs?

Interesting thread at Slashdot, based on a vacuous article about why CIOs don’t become CEOs.

Here is my comment as I posted it:

I found May’s utterances utterly superficial and very old-fashioned. First of all, though not many CEOs come from the post of CIO, many top executives have had a stint in the IT part of the organization, learning about what the technology does and how to think about the company as a value-producing system. As for the use of Peters and Waterman’s book as a sort of criterion for what constitutes a good CEO, that is laughable – the book refers to companies, not CEOs. And it has been thoroughly debunked many times.

May’s article would have been right on about 10 years ago. Back then, the reason CIOs did not make it into the top spot (in fact, 40% of CIOs were fired, and the average tenure was about 2.3 years) was because they did not understand that the skills that got you into the top IT position were not the skills that would keep you there.

In CSC, we conducted a survey of CIOs in the top 1000 corporations in the world, and got a surprising result: Of the 40% of CIOs that were fired, only a few were fired for "not producing cost-effective IT". The rest went because they could not communicate with the top management group, were ineffective change agents, or could not contribute to business strategy development. In other words, when you are a CIO in a large corporation, you are no longer the IT organization’s representative into top management, but the other way around. If you cannot make the transition to thinking about the whole company as a system, you are toast.

There are actually quite a few companies where technical competence is visible in top management. Royal Bank of Scotland comes to mind, with one of the most effective IT strategies in the world (a central "manufacturing division" handles transactions, IT, HR, and call centers, leaving the branch network – and they have more than 50 brands – to serve customers and grow the business). UPS, Wal-Mart, Amex, Wells Fargo, Royal Bank of Canada, some of the large airlines, Dell, quite a number of telcos, some insurance companies (I particularly like a German one called AMB, which runs many insurance companies off the same systems) and many others have top management that understand the impact of IT and think of the whole company as a system. Does that mean that they come from the position of CIO? Not necessarily, though some of them do.

Being a CIO is about information, not IT. For that, you have a CTO or a CIO office that handles the technical pieces. Most of the top CIOs I know worry about the customers and the customer experience. One CIO of a large hotel chain worries about the speed of the in-room Internet connection – and whether the ventilation is good enough that you can shave after taking a shower without the mirror getting fogged up.

IT is a tool. It is an important tool, but it is what it does for the customer that matters. And the role of the IT organization is not to make IT elegant – it is to make business elegant. If the tools happen to be boring and the CIO not very visible – so what?

The simpler database

The relational database model, initially formulated by Codd in 1970, is the dominant way to store structured data at present. However, queries to a relational database are fairly slow compared to complex queries against a search engine index, which (in addition to the poor user interface of relational databases, often mapped directly to the table structure) has meant that search engines are now competitors for the job of extracting subsets from structured data. This, as well as the difficulty of mapping object structures onto a relational database, leads me to wonder whether not only the query interface, but in fact the whole concept of the relational database may soon face a disruptive threat from simpler structures.

A conversation at a Concours brainstorming session set me dusting off vaguely remembered concepts such as hierarchical and network databases, as well as the old difficulty of storing a multi-component concept with class attributes in a relational database (once characterized by the Economist as trying to store a car by putting the wheels under "W" and the engine under "E"). I found a good introductory text on the associative model of data by Simon Williams here (registration required), which also sums up the differences between the various data models pretty well.

Long-term, systems will be simulations of the reality they are to manage, but realities will also be influenced by the differences in processing capacity between humans and machines. A great example of the latter is the concept of random pick, which is well explained by Chris Anderson’s post about the shoe company Zappos, which stores shoes randomly (with each pair’s UPC scanned) as they come in. The result is a less optimized pick, but the total effort is less than the effort of sorting pairs on the way in. Another effect, of course, is that they spend the least amount of effort on the products that move the least. (Though I love the comment about whether they occasionally have to defragment the warehouse.)
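The random-pick idea is simple enough to sketch. The class and method names below are hypothetical, not Zappos’ actual system – the point is just that storage needs no sorting as long as an index remembers where everything landed:

```python
import random

class RandomPickWarehouse:
    """Toy sketch of 'chaotic' storage: incoming items go into any free slot,
    and a UPC index records where everything ended up."""

    def __init__(self, num_slots):
        self.free_slots = list(range(num_slots))
        self.index = {}  # UPC -> list of occupied slot numbers

    def store(self, upc):
        # No sorting on the way in: grab a random free slot and scan the UPC
        slot = self.free_slots.pop(random.randrange(len(self.free_slots)))
        self.index.setdefault(upc, []).append(slot)
        return slot

    def pick(self, upc):
        # Picking may mean a longer walk, but *finding* is just an index lookup
        slot = self.index[upc].pop()
        self.free_slots.append(slot)
        return slot
```

Slow-moving products simply sit in their slots untouched, which is where the savings on the least-picked items come from.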

My apologies for this nerdy detour – I am writing a paper on, among other things, search technology, as well as participating in an interesting Concours project on Enterprise Architecture.

With that, back to our regular programming…

I want one of those!

http://services.brightcove.com/services/viewer/federated_f8/271543545
Read more about this technology and the company behind it here.

Competing on Analytics

Just got off the phone from a Concours teleconference with Tom Davenport, Bob Morison and about 50 other participants. The topic was Competing on Analytics, which was also the title of a Harvard Business Review article he wrote in January last year, and which forms the basis for a book of the same title coming out soon. Concours is launching a membership program called the Business Analytics Concours (glossy brochure here). The upshot is a revival of and refocus on business analytics, following a number of examples of companies (in particular Harrah’s casinos) that have had success due to their ability to relentlessly analyze and optimize their performance, value offerings and market opportunities.

Tom is an interesting character and a prolific writer on knowledge management, process optimization and knowledge worker productivity.  He outlined how companies that compete on analytics tend to share certain attributes, such as senior management advocacy, an enterprise approach to analysis (rather than letting the thousand analysts bloom), going first deep and then broad, and paying attention to the development of a strong analytical capability.

My role was to comment and to discuss the technology side: While everyone recognizes that competing on analytics is a question of culture, understanding of the business environment and analytical skill and drive, there is a technology side to it as well. What kind of technologies can enable analysis and optimization, what emerging technologies should IT be monitoring and experimenting with in this space, and how would an enterprise architecture for a company with an analytical bent look different from most companies’ architectures today?

Here are my notes:

  • the obvious technologies needed are repositories for data, such as data warehouses and datamarts, as well as business intelligence software for analyzing it
  • on a more abstract level, we need technologies that allow for rapid collection (most data is out there in digital form, but it needs to be made analyzable), structuring (hopefully avoiding human intervention in the costly data cleanup phase) and analysis of a wide range of data (which very often turns into experimentation)
  • in particular, we need technologies that let people develop models from operational data, and redo structuring and categorization in a dynamic and shared way (did anyone say wiki?)
  • a short-term path to better analytics may be search technology, which gives access to unstructured data, allows joining of many sources, and does not require rearchitecting and a massive job of initial categorization and structuring
  • a sizeable up-front investment in data preparation will kill the analytical impulse at birth
  • long-term, there are interesting possibilities in the kind of data exchange protocols visualized by Van Jacobson, i.e. a form of networks that makes data location irrelevant and pathways hidden 
  • lastly: We have to realize that this is a cultural, strategic and managerial issue, and that almost any technology can be used in an analytical way. If you are not inclined to analyze your environment, no amount of technology is going to make you do it. In fact, more technology can distance you from the real world, and make you give in to the temptation of letting people have pre-saved spreadsheets and fixed models rather than the ability to analyse
  • an ideal would be to have experimental facilities, where things could be sim’ed out, complete with a button labeled "Make it happen".
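As a toy illustration of the search-first point above – joining records from several sources without up-front schema work – here is a minimal inverted index (the records and their contents are made up for the example):

```python
from collections import defaultdict

# Records from different "sources" – no shared schema required up front
records = [
    {"source": "crm",  "text": "customer Acme complained about late delivery"},
    {"source": "ops",  "text": "delivery truck 12 delayed by weather"},
    {"source": "mail", "text": "Acme asks for refund on order 991"},
]

# Build a simple inverted index: token -> set of record ids
index = defaultdict(set)
for i, rec in enumerate(records):
    for token in rec["text"].lower().split():
        index[token].add(i)

def search(*terms):
    """AND-query across all sources at once, no joins or cleanup needed."""
    hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
    return [records[i] for i in sorted(hits)]
```

A query like `search("acme", "delivery")` cuts across CRM, operations and mail data in one go – exactly the kind of low-ceremony analysis that a heavy data-warehouse project postpones.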

This looks like an interesting project, because it goes right to the heart of what companies must get better at in a world where information spreads rapidly, imitation is easy, and you compete on your evolving optimization and innovation capability rather than individual technologies or services.

Jacobson on the data overlay web

Van Jacobson gives a great talk at Google about the need to create the next generation web, this time in the form of an overarching data exchange protocol. His argument is that we should do to the Internet what the Internet did to proprietary networks: overlay it with a higher degree of abstraction. This time, it is the data: data self-authenticates, and you can use any distribution mechanism, any kind of underlying network. http://video.google.com/googleplayer.swf?docId=-6972678839686672840&hl=en
(If in-post video doesn’t work, try this link.)

Great history of the web, excellent abstraction. Here is his concluding slide:

  • IP rescued us from plumbing at the wire level but we still have to do it at the data level. A dissemination based architecture would fix this.
  • Many ad-hoc dissemination overlays have been created (Akamai CDN, BitTorrent, Sonos mesh, Apple Rendezvous) – there’s a demonstrated need.
  • If we are going to have a future, we should rescue some grad students from re-inventing the past.
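The self-authenticating-data idea can be sketched in a few lines (a toy, not Jacobson’s actual protocol): if content is named by its own hash, it can be fetched from any cache, peer or disk and still be verified, making the distribution path irrelevant.

```python
import hashlib

def name_of(content: bytes) -> str:
    # Content is named by its hash, not by where it lives
    return hashlib.sha256(content).hexdigest()

class UntrustedCache:
    """Stands in for any distribution mechanism: a CDN node, a peer, a disk."""
    def __init__(self):
        self.store = {}
    def put(self, content: bytes):
        self.store[name_of(content)] = content
    def get(self, name: str) -> bytes:
        return self.store[name]

def fetch(name: str, cache: UntrustedCache) -> bytes:
    content = cache.get(name)
    # The data authenticates itself: if the hash matches the name, it is genuine
    if name_of(content) != name:
        raise ValueError("content does not match its name")
    return content
```

Because verification travels with the data, trust in the intermediary (Akamai, a BitTorrent peer, a mesh node) is no longer required – which is what makes the dissemination overlays above generalizable.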

 (Via Paul Kedrosky, who notes a wonderful little anecdote about Copernicus and his early predictions from the heliocentric system. Not a bad example of a disruptive technology, as it were.)