John Stuckey has written a good review of Nicholas Carr’s Does IT Matter? in ACM Ubiquity.
Carr’s 2003 HBR article (called “IT Doesn’t Matter”, an aggressive title the content did not reflect) and even the book are rather tiresome. For one thing, this discussion is a repeat: in 1990 it was Max Hopper (a man with real IT pedigree and understanding) with his “Rattling SABRE: New Ways to Compete on Information” (Harvard Business Review (May-June): 118-125), in which he made an offhand remark that owning the technology didn’t matter any more and got flak for it. Hopper was right, of course (though he managed to get a few more years out of owning SABRE). So were Mata, Fuerst and Barney, who analyzed the competitive impact of IT (Mata, F. J., W. L. Fuerst, and J. B. Barney (1995). “Information Technology and Sustained Competitive Advantage: A Resource-Based Analysis.” MIS Quarterly 19(4): 487-505), arguing that it was not the technology, but the way you use it, that matters. Not to mention Erik Brynjolfsson‘s many excellent articles on the measurable effects of the technology (see, for instance: Hitt, L. and E. Brynjolfsson (1996). “Productivity, Business Profitability, and Consumer Surplus: Three Different Measures of Information Technology Value.” MIS Quarterly 20(2): 121-142).
That Carr should get so much out of repeating the tired old truth – that it is not the technology, but how you use it, that matters – in a sensationalist packaging in 2003 just indicates how shallow the understanding of technology is among investors and many managers. As one manager told me during the go-go dot-com years: All the investments we see in IT now are not really in IT – they are in misdirected marketing, buying market share you can neither defend nor make money on.
Oh well. Plus ça change…
According to Red Herring Blog, Blackboard.com is now worth over $0.5b. Truly depressing. The best thing that can be said for their product, a course management system for universities, is that the competition is even worse. Why anyone would want to value a company that sells shoddy software to universities (not known for willingly parting with huge sums for stuff they can either do themselves or get cheaply somewhere else) is beyond me. Software-as-service is an excellent concept (not that Blackboard compares to salesforce.com, since they license software much as anyone else), but the business model is shaky in the long term. The last time it was bandied about it was called ASPs, and it didn’t work then either. The reason is simple: Software concepts are copyable, and ideas spread, so prices fall until you no longer make money unless you manage to establish and defend some sort of network externality. For course management systems, the only direct network externality I have been able to find is in plagiarism detection – and even there it is only slight, as the effect of plagiarism detection is mostly preemptive.
So why don’t I like Blackboard and other course management systems? Here is a list (modified from this talk):
- All they aim to do is put paper-based courses on-line – with no vision of what teaching should be like beyond Powerpoint slides. They justify this by saying they need to pander to technology virgins – that to facilitate learning and increase acceptance they have to use the tired metaphors of “classrooms”, “group areas”, “documents” and other hogwash. Hobgoblins of little minds, I say.
- They dumb down the user interface for the course creator (and for the student, I should add) to the point of unusability, and justify this by talking about user-friendliness and how the system can be used by almost anyone with a minimum of training. That is all well and good, but it is now 37 years since Simula 67 was invented, and decades since Smalltalk gave us a graphically oriented object browser. Since I manage not one course, but about 10, I would very much like to do that in an interface that allows me to behave like a technologically competent and moderately literate adult, rather than being forced into an interface designed for an airline check-in machine catering to moronic, alcohol-marinated charter tourists.
- They are built on a systems architecture that, while using a relational database, does not let the capabilities of the database show through to the user, thus treating every course as a separate entity with ensuing duplication of content. So I end up with 6 different copies of an article because I have 6 courses – and an updating nightmare.
- They are not built to interface effectively and intuitively with other systems. This is rather odd, given that the data for the one task that justifies a course management system – namely identity management, that is, the linking of students to learning material – comes not from the course management system itself but from student administration systems.
- They arrange everything into courses, not making any allowance for the course creator or creators to have draft areas, repositories of learning modules, conditional access to course material or alternative interfaces to, say, non-students.
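The duplication complaint above boils down to a missing many-to-many relation between courses and content. A minimal sketch of what letting the database show through would look like – hypothetical table and column names of my own, not Blackboard’s actual schema – in Python with SQLite:

```python
import sqlite3

# One shared article, linked to many courses via a join table --
# update the article once and every course sees the new version.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
    CREATE TABLE course  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE course_article (
        course_id  INTEGER REFERENCES course(id),
        article_id INTEGER REFERENCES article(id),
        PRIMARY KEY (course_id, article_id)
    );
""")
conn.execute("INSERT INTO article VALUES (1, 'Rattling SABRE', 'v1')")
for cid in range(1, 7):  # six courses, one shared copy of the article
    conn.execute("INSERT INTO course VALUES (?, ?)", (cid, f"Course {cid}"))
    conn.execute("INSERT INTO course_article VALUES (?, 1)", (cid,))

# A single UPDATE fixes the article everywhere at once.
conn.execute("UPDATE article SET body = 'v2' WHERE id = 1")
rows = conn.execute("""
    SELECT c.name, a.body FROM course c
    JOIN course_article ca ON ca.course_id = c.id
    JOIN article a ON a.id = ca.article_id
""").fetchall()
print(len(rows))  # six courses, every one of them seeing 'v2'
```

One update instead of six copy-and-paste operations – that is all it would take for the relational capabilities to show through.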
In short, Blackboard and its ilk are bad systems, because they do not have the capability in them to drive use of IT in teaching forward. All they do is take half a step, and in doing so they make sure we won’t get further.
This is an area where open source/free systems really ought to shine, where componentization, blogs, wikis and a rolling technological evolution – including, as the report Thwarted Innovation: What Happened to e-learning and Why by Robert Zemsky and William F. Massy says, evolving a dominant design for learning components – is critical. But it won’t come from the commercial providers. We have to do it ourselves. For instance, what if MIT took their OpenCourseWare initiative and created a way for people to specify their own courses in the same format, even creating learning components themselves the same way MIT does?
Boy, wouldn’t that be fun…..
Stupid management books (from what Andy Pettigrew calls the “Heathrow School of Management”) are irritating. It seems to me that the more moronic the book, the more it sells. And if you think they are irritating as a reader, imagine what they are to serious academics.
At the head of this trend has been “Who Moved My Cheese?”, a leadership tale with mice scurrying around, spouting wisdom about being open to new ideas. Now there is a copycat (copyrat?) – I suppose because the author was open to new ideas – called “Squirrel, Inc.” by Stephen Denning. The Economist has given this “book” the treatment it deserves:
ONCE upon a time there was a lemming called Stephen. He was a very clever lemming: top rodents would ask for his advice before setting off to lead their fellows on long and difficult journeys. One day, however, Stephen woke up with a start to find that the left-hand side of his brain, the part he used to analyse his fellow rodents’ problems, had gone completely dead. He was reduced to using something that he hardly knew existed: the right-hand side, the part that allows lemmings to do wildly dotty things without rhyme or reason.
And it was then that it came to him in a flash: he would help to save the whole lemming race by writing a book. It would be a short book, but it would look like a long one because it would have lots of blank pages and many quotations in big type. It would be an allegory about animals, because Spencer had done one about mice that had been a huge hit, and George, of course, had transformed the genre into literature with his tales of life down on the farm. […]
The irritating thing, of course, is that books like these are being read by the management meatballs of this world, to quote P.J. O’Rourke. Incidentally, I suspect The Economist’s Global Executive page, which among other things summarizes noteworthy journal articles and books, is a holding pen for stuff that for some reason or other didn’t make it into the magazine. There are nuggets to be found there, as the rodent review shows.
But the question remains: Why do stupid and simplistic management books – books that the thoughtful and well grounded managers I tend to talk to wouldn’t touch with a ten-foot grounded earth excavator – sell so well?
Bob Cringely is going against the grain, as usual (and quite refreshingly so, too), in his recent column on why telcos should keep their circuit-switched network. I am not sure how this is done in the US, but at least in Norway, the seemingly circuit-switched POTS network is actually fibre-based and packet-switched in the background, the only circuit-switched bit being the “last mile”, so I am not sure how his arguments would work here. To me it seems, like it does to Techdirt, that this is an argument for telcos hanging on to their dying technology as it is being disrupted – though the “super-customer” forcing the incumbent to hang on to the old technology seems not to be there.
Anyway, the interesting part is Bob’s argument that it should be possible to reduce the bandwidth needed for sending video by mimicking the protocol of the optic nerve, which according to Bob has a capacity of 100 Kbps. Now, I am no expert in optics or the anatomy of the eye, but it occurs to me that one of the reasons the eye can make do with relatively limited bandwidth is that so much filtering takes place before the picture gets transmitted. When I look at a movie, I don’t take in the whole picture at once – I focus on some part of it, and am only dimly aware of the rest. I do this by positioning my eyes towards what I am looking at and then focusing – in the process selecting just a few of the millions of “bits” the world insists on sending towards me in analog form. What I focus on comes through in glorious detail (at least when I have my glasses on) and the surrounding stuff is out of focus (and there are different physical sensors in the eye to handle this – two protocols, if you like).
Now, since no two people focus on the same part of the picture through a movie, you either need to send the whole picture at the same quality everywhere, or you need to establish some form of two-way communication, so that only what the eye will actually look at is transmitted towards it. In other words, to send video down a 64 Kbps connection, you need it to be two-way, with almost zero latency. Moreover, you would need one connection for each viewer.
The eye selects what to see, in communication with the brain. The world, which sends images to the eye, has unlimited bandwidth. I may be wrong, but in order to send video over 64K, as Bob proposes, it seems to me we need to extend the selection properties of the eye into the server rather than send the whole image into each person’s home – and communicating eye-tracking and focus information from each individual viewer back to be processed in time for the image to be selected and transmitted in optimized format seems to me a formidable challenge, circuit switching or not.
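The arithmetic behind this argument can be sketched on the back of an envelope. All the numbers below are my own illustrative assumptions (frame size, compression ratios, fovea patch size), not measurements:

```python
# Back-of-envelope comparison: whole frame vs. the small foveal
# region the eye actually resolves sharply. Assumed numbers only.
WIDTH, HEIGHT, FPS = 640, 480, 25   # modest full-frame video
BITS_PER_PIXEL = 0.1                # assume heavy compression

full_frame_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL

# Treat the sharp region as a 64x64 patch, and assume the blurry
# surround compresses a further 100:1 (pure assumption).
foveal_bps = 64 * 64 * FPS * BITS_PER_PIXEL
surround_bps = full_frame_bps * 0.01
foveated_bps = foveal_bps + surround_bps

print(f"full frame: {full_frame_bps / 1000:.0f} Kbps")  # 768 Kbps
print(f"foveated:   {foveated_bps / 1000:.0f} Kbps")    # well under 64
```

Under these assumptions the full frame blows past 64 Kbps by an order of magnitude, while the foveated stream fits comfortably – which is exactly why the selection has to happen at the server, per viewer, with a near-zero-latency back-channel for the gaze data.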
So, to me it looks like packet switching is the way to go, also for technical reasons. A second argument is that telcos make money when users talk to users, not when they connect to a central feed (that is the domain of broadcasters, and for telcos normally only a way to get users to subscribe to their network, until talking to each other takes over for talking to a central source). And packet switching allows for video-to-video sharing (legal or not). That is the future, and whether the telcos and broadcasters make the transition, I don’t know and I don’t care. I know I, as a user, will. The circuit switched network will not.
Update June 30:
David Isenberg has a rather more direct take on this in his “I[diot], Cringely“.
Together with Jay Williams (who, incidentally, I have now worked with for three years and only seen in person once – I didn’t know what he looked like until April this year) I did a Concours Group CIO Staff Meeting on collaborative systems yesterday. One of the parts was a discussion of blogs and wikis in the corporate space – what can they be used for and how do you manage them? Some thoughts:
- To make shared self-publishing work within a corporation, you need a purpose (why are you doing it, aside from self-expression) and a shared set of norms and values. According to this article in Business Week, Microsoft has now let blogging loose within the company (made visible through Channel 9), but (at least as far as I can see) with no stated purpose and the only rule being “don’t be stupid”. Where is the purpose, and, though that may not be visible on the outside, where is the space for development of a shared understanding of what is OK and what is not?
- Blogging and wikis (the latter, incidentally, I think will be most important in the short run) are emerging as tools for corporate use very much like the Web did in 1994: as something that is done first for academic reasons, then by individuals, then by corporations. In the process, a number of expensive knowledge management and content management tools will disappear.
- The chief problem for IT shops will not be the technology (that is trivial, especially since much of the management is taken care of through simple forms of version control) but the softer parts of the equation: How do you establish a culture for sharing and for making your (half-baked) ideas visible? Unless IS has a voice (and I have seen a number of IS departments hiring communications people lately) it will have little power to manage the use of the tools. Shared spaces are managed through norms rather than rules, and the usual remedies of user access restrictions and various meta-data based automated rules will be counter-productive.
I look forward to a lot of corporate Wiki’ing and many internal blogs – both are examples of how our uses of the technology are catching up with the technology’s capabilities. (In other words, both have no counterpart in a paper-based world, unlike email, databases and mailing lists.) They will evolve, they will be used inside corporations, and someone will make money on adding the corporate flair that makes those of us who execute for a living comfortable in the knowledge that we are in control. Or seem to be.
Blogging is fun, but it is also a time sink, especially if your closet nerdiness takes over and you start fiddling with the design of the site, as I have done lately. It is now ready: Fairly conservative, with three columns, Google AdSense installed, comments and trackbacks and other details taken care of. Remaining: Adding some images and logos, with links. Downside: Fiddling with code takes time. Upside: I am now beginning to understand CSS, have established backup routines (which, incidentally, ought to be a part of MT) and the experience from that part can be used for my real home page, perhaps also for my courses. Definitely for my Norwegian blog. Time to ditch frames. And to get productive on other things…..
Incidentally: Minor irritation: The editing screens for MT have buttons for adding hyperlinks, images etc. – but these seem to only work in Microsoft IE, which is Not a Good Thing, especially given this article from Techdirt.
Incidentally II: MT Blacklist seems to be working – the amount of spam does not go down much, but at least it is a one-or-two-or-occasionally-three-click operation to get rid of them. Anklebiters.
Simson Garfinkel is right on the money when, reflecting on the Tablet PC Nonrevolution, he says that “it seems that tablet PCs spend a large part of their lives serving as traditional laptops, with the stylus snug in its holster while the keyboard gets a vigorous workout.” Definitely my impression and experience as well.
I have a Toshiba T3500, which is a Tablet PC with a keyboard and a swivel screen. When deciding which model to get, I was torn between the tablet model and the sleek M100, an “executive” machine without tablet functionality, but lighter, with a slightly better screen, and slightly larger keyboard.
Do I regret the choice? I am not sure. I have used the drawing functionality of the tablet a bit, have tried to enter and edit text (still too clumsy, as Simson says), and have occasionally surfed in tablet mode, especially when reading long documents on the web. Playing FreeCell using a pen was great, but I have deleted that game to up my productivity a bit…..
The technology is not quite there yet – kind of like Windows 2.11, which gave an inkling of what would come (with Palm playing the role of Apple), but with too slow response time and not enough well-behaved programs. OneNote needs to be an integral part of Windows, the editing facilities need to be more intuitive (backspacing with a pen is too cumbersome, I want to scratch things out) and the drawing format must be more integrated into file formats, especially in Powerpoint. (I tried making a “handwriting” presentation in Powerpoint, to illustrate using new technology in traditional ways, but the drawing interface was too cumbersome and lacked pressure sensitivity. Pasting in things from Windows Journal took too long….)
Aside from that, the Toshiba gets suspiciously hot and has developed a rattle in the fan. Time to back up and call Toshiba customer service…..