Enough said, watch it. As a colleague twittered: This will change computing.
(That being said, this is a very poor filming – there are no pictures of the screen, aside from a glimmer now and then.)
These are my raw notes from the session with Stephen Wolfram on the pre-launch of the Wolfram Alpha service at the Berkman Center. Unfortunately, I was on a really bad Internet connection and only got the sound, and I missed the first 20 minutes or so while running around trying to find something better.
Notes from Stephen Wolfram on Alpha debut
…discussion of queries:
- nutrition in a slice of cheddar
- height of Mount Everest divided by length of Golden Gate bridge
- what’s the next item in this sequence
- type in a random number, see what it knows about it
- "next total solar eclipse"
What is the technology?
- it computes things; the more specific your question, the harder it is to find the answer ready-made on the web
- instead, we try to compute using all kinds of formulas and models created from science and package it so that we can walk up to a web site and have it provide the answer
- four pieces of technology:
– data curation, trillions of pieces of curated data, free/licensed, feeds, verify and clean this (curate), built industrial data curation line, much of it requires human domain expertise, but you need curated data
– algorithms: methods and models, expressed in Mathematica, there is a finite number of methods and models, but it is a large number…. now 5-6 million lines of math code
– linguistic analysis to understand input, no manual or documentation, have to interpret natural language. This is a little bit different from traditional NL processing: working with a more limited set of symbols and words. Many new methods; it has turned out that ambiguity is not such a big problem once we have mapped it onto a symbolic representation
– ability to automate presentation of things. What do you show people so they can cognitively grasp what you are showing? Requires computational esthetics, domain knowledge.
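The "ambiguity disappears once you map to symbols" point above can be illustrated with a toy sketch. Everything here is invented for illustration (the `PATTERNS` table, the tuple shapes, the function name); it is not Wolfram Alpha's actual machinery, just the general idea that a free-text query reduced to a symbolic expression gives each symbol a single, computable meaning.

```python
# Toy sketch: mapping free-form queries onto a symbolic form.
# PATTERNS and the tuple shapes are made up for illustration only.
PATTERNS = {
    "height of": ("Property", "height"),
    "population of": ("Property", "population"),
}

def to_symbolic(query):
    """Naive parse: find a known property phrase, treat the rest as the entity."""
    q = query.lower().strip()
    for phrase, (kind, prop) in PATTERNS.items():
        if q.startswith(phrase):
            entity = q[len(phrase):].strip()
            return (kind, prop, ("Entity", entity))
    return ("Unknown", q)

print(to_symbolic("height of Mount Everest"))
# → ('Property', 'height', ('Entity', 'mount everest'))
```

Once the query lives in this symbolic space, downstream computation never has to revisit the ambiguity of the original wording.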
Will run on 10k CPUs, using Grid Mathematica.
we have a decent start on 90% of the shelves in a typical reference library
provide something authoritative and then give references to something upstream that is
know about ranges of values for things, can deal with that
try to give footnotes as best we can
Q: how do you deal with keeping data current
- many people have data and want to make it available
- mechanism to contribute data and mechanism for us to audit it
first instance is for humans to interact with it
there will be a variance of APIs,
intention to have a personalizable version of Alpha
metadata standards: when we open up our data repository mechanism, the standards we use can make data available
Questions from audience:
Differences of opinion in science?
- we try to give a footnote
- Most people are not exposed to science and engineering, you can do this without being a scientist
How much will you charge for this?
- website will be free
- corporate sponsors will be there as well, in sidebars
- we will know what kind of questions people ask, how can we ingest vendor information and make it available, need a wall of auditing
- professional version, subscription service
Can you combine databases, for instance to compute total mass of people in England?
- probably not automatically…
- can derive it
- "mass of people in England"
- we are working on the splat page, what happens when it doesn’t know, tries to break the query down into manageable parts
300th largest country in Europe? – answers "no known countries"
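The "splat page" idea above (breaking a query the system cannot answer whole into fragments it does recognize) might look something like this minimal sketch. The knowledge base and both function names are invented for illustration; a real system would also resolve overlapping matches.

```python
KNOWN = {"mount everest", "golden gate bridge", "england"}  # toy knowledge base

def subphrases(tokens):
    """All contiguous sub-phrases, longest first, so the most specific match wins."""
    n = len(tokens)
    for length in range(n, 0, -1):
        for start in range(n - length + 1):
            yield " ".join(tokens[start:start + length])

def decompose(query):
    """Collect the known fragments hidden inside a query the system
    cannot answer as a whole (a real system would dedupe overlaps)."""
    return [p for p in subphrases(query.lower().split()) if p in KNOWN]

print(decompose("height of Mount Everest divided by length of Golden Gate Bridge"))
# → ['golden gate bridge', 'mount everest']
```

Each recognized fragment can then be answered on its own, which is exactly what a helpful failure page would show.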
Data sources? Population of Internet users. how do you choose?
- identifying good sources is a key problem
- we try do it well, use experts, compare
- US government typically does a really good job
- we provide source information
- have personally been on the phone with many experts, is the data knowable?
- "based on available mortality data" or something
Technology focus in the future, aside from data curation?
- all of them need to be pushed forward
- more, better, faster of what we have, deeper into the data
- being able to deal with longer and more complicated linguistics
- being able to take pseudocode
- being able to take raw data or image input
- it takes me 5-10 years to understand what the next step is in a project…
How do you see this in contrast with semantic web?
- if the semantic web had been there, this would be much easier
- most of our data is not from the web, but from databases
- within Wolfram Alpha we have a symbolic ontology, didn’t create this as top down, mostly bottom-up from domains, merged them together when we realized similarities
- would like to do some semantic web things, expose our ontological mechanisms
At what point can we look at the formal specs for these ontologies?
- good news: All in symbolic mathematical code
- injecting new knowledge is complicated – natural language is surprisingly messy, with new terms coming in; for instance, when putting in people, there is this guy called "50 Cent"
- exposure of ontology will happen
- the more words you need to describe the question, the harder it is
- there are holes in the data, hope that people will be motivated to fill them in
Social network? Communities?
- interesting, don’t know yet
How about more popular knowledge?
- who is the tallest of Britney Spears and 50 cent
- popular knowledge is more shallowly computable than scientific information
- linguistic horrors, book names and such, much of it clashes
- will need some popularity index, use Wikipedia a lot, can determine whether a person is important or not
The meaning of life? 42….
Integration with CYC?
- CYC is the most advanced common-sense reasoning system
- CYC takes what they reason about things and turns it into computing strengths
- human reasoning is not that good when it comes to physics; better to do like Newton and use math
Will you provide the code?
- in Mathematica, code tends to be succinct enough that you can read it
- state of the art of synthesizing human-readable theorems is not that good yet
- humans are less efficient than automated and quantitative qa methods
- in many cases you can just ask it for the formula
- our pride lies in the integration, not in the models, for they come from the world
- "show formula"
Will this be integrated into Mathematica?
- future version will have a special mode, linguistic analysis, pop it to the server, can use the computation
How much more work on the natural language side?
- we don’t know
- pretty good at removing linguistic fluff, have to be careful
- when you look at people interacting with the system, pretty soon they get lazy and only type in the things they need to know
- word order irrelevant, queries get pared down, we see deep structure of language
- but we don’t know how much further we need to go
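The paring described above (fluff removed, word order ignored) can be sketched as a normalization step. The stopword list and function name below are my own, purely illustrative; the point is that two queries that pare down to the same token set are treated as the same question.

```python
# Illustrative fluff list; a real system's list would be learned, not hand-picked.
STOPWORDS = {"what", "is", "the", "of", "a", "in", "please", "tell", "me"}

def pare(query):
    """Strip linguistic fluff and ignore word order by reducing a query
    to an (unordered) set of content tokens."""
    tokens = query.lower().replace("?", "").split()
    return frozenset(t for t in tokens if t not in STOPWORDS)

print(pare("What is the height of Everest?") == pare("everest height"))
# → True
```

This is also why lazy, pared-down queries work just as well as full sentences: both normalize to the same thing.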
How does this change the landscape of public access to knowledge?
- proprietary databases: challenge is make the right kind of deal
- we have been pretty successful
- we can convince them to make it casually available, but we would have to be careful that the whole thing can’t be lifted out
- we have yet to learn all the issues here
- have been pleasantly surprised by the extent to which people have given access
- there is a lot of genuinely good public data out there
This is a proprietary system – how do you feel about a wiki solution outcompeting you?
- that would be great, but
- making this thing is not easy, many parts, not just shovel in a lot of data
- Wikipedia is fantastic, but it has gone in particular directions. If you are looking for systematic data – properties of chemicals, for instance – then over the course of the next two years entries get modified and there is no consistency left
- the most useful thing about Wikipedia is the folk knowledge you get there, what are things called, what is popular
- have thought about how to franchise out, it is not that easy
- by the way, it is free anyway…
- will we be inundated by new data? Encouraged by good automated curation pipelines. I like to believe that an ecosystem will develop, we can scale up.
- if you want this to work well, you can’t have 10K people feeding things in, you need central leadership
- "map of the cat" (this is what I call artificial stupidity)
- does not know anatomy yet
- how realtime is stock data? One minute delayed, some limitations
- there will be many novelty queries, but after that dies down, we are left with people who will want to use this every day
How will you feel if Google presents your results as part of their results?
- there are synergies
- we are generating things on the fly, this is not exposable to search engines
- one way to do it could be to prescan the search stream and see if wolfram alpha can have a chance to answer this
Role for academia?
- academia no longer accumulates data; it is useful for the world, but not for the university
- it is a shame that this has been seen as less academically respectable
- when chemistry was young, people went out and looked at every possible molecule
- this is much too computationally complicated for typical libraries
- historical antecedents may be Leibniz’ mechanical and computational calculators, he had the idea, but 300 years too early
When do we go live?
… a few weeks
- maybe a webcast if we dare…
This rather frightening article by Nicholas Eberstadt from World Affairs looks into the causes of Russian depopulation and falling life expectancy over the last 50 years or so. Russia is depopulating at a rate only found in really troubled countries in Africa, and the cause is high mortality, in particular among young men:
According to the U.S. Census Bureau International Data Base for 2007, Russia ranked 164 out of 226 globally in overall life expectancy. Russia is below Bolivia, South America’s poorest (and least healthy) country and lower than Iraq and India, but somewhat higher than Pakistan. For females, the Russian Federation life expectancy will not be as high as in Nicaragua, Morocco, or Egypt. For males, it will be in the same league as that of Cambodia, Ghana, and Eritrea.
In the face of today’s exceptionally elevated mortality levels for Russia’s young adults, it is no wonder that an unspecified proportion of the country’s would-be mothers and fathers respond by opting for fewer offspring than they would otherwise desire. To a degree not generally appreciated, Russia’s current fertility crisis is a consequence of its mortality crisis.
The reason is binge alcoholism (on average, one bottle of vodka per week, according to some experts), HIV, tuberculosis, accidents and violence: "No literate and urban society in the modern world faces a risk of deaths from injuries comparable to the one that Russia experiences." The consequences are dire:
In the contemporary international economy, one additional year of life expectancy at birth is associated with an increase in per capita output of about 8 percent. A decade of lost life expectancy improvement would correspond to the loss of a doubling of per capita income. By this standard, Russia’s economic as well as its demographic future is in jeopardy.
So, how to mitigate this? The author sees few solutions and recommends none.
From part the first:
Technology professionals have long struggled with getting a complex message across to management. In our honest and unguarded moments, we talk of "dumbing it down for the suits." But the challenge is more subtle than that. We need to repackage the argument to work within the frame of oral thought.
In addition to helping the analytically biased see the value of creating a compelling story, you need to help them see how and why story works differently than analysis. The best stories to drive change are not complex, literary, novels. They are epic poetry; tapping into archetypes and cliché, acknowledging tradition, grounded in the particular.
…which, of course, is why personalized examples work so well. (And work so badly when not connected to a logical argument or important point.)
In other words – there should be plenty of work for all those laid-off journalists in companies, trying to find le mot juste that will transform the numbingly complex into the directionally intuitive.
Read the whole thing – if nothing else, for the language.
An excellent and truly scary article by Margaret Talbot in the New Yorker about the use of neuroenhancers by people who are not ill. Which is comparable to recreational plastic surgery, which I don’t like either.
Is it just me, or is cheating seen as more and more normal and not to be punished or even held in contempt? When I catch students plagiarizing (which happens with a depressing frequency, partly because the tools for doing so have gotten so much better) their defense is more and more that this is normal, that you cannot expect them to come up with something original when everything is available out there on Google and Wikipedia. My retort is that I need to judge them on their own work, not others’, and that they therefore need to make it clear to me what they have done themselves and what they have found somewhere else. And their answer is that they put "Source: Wikipedia" at the bottom and therefore they are scot free, so there.
I would get angry if this wasn’t so depressing and so pointless. I am tempted to just fail them. Not for plagiarism – which entails disciplinary committees and all sorts of make-work. Rather an F for outright stupidity.
It is some consolation that creativity is one area where neuroenhancers don’t seem to work. But they might, as the article finds, help these modern-day multitaskers concentrate on one specific task (hoping that it is a productive one and not, say, obsessively alphabetizing your library.) But neuroenhancers won’t make your ideas better – they won’t assist in spotting the prey, only in bringing it home. In the most dreary way possible:
Every era, it seems, has its own defining drug. Neuroenhancers are perfectly suited for the anxiety of white-collar competition in a floundering economy. And they have a synergistic relationship with our multiplying digital technologies: the more gadgets we own, the more distracted we become, and the more we need help in order to focus. The experience that neuroenhancement offers is not, for the most part, about opening the doors of perception, or about breaking the bonds of the self, or about experiencing a surge of genius. It’s about squeezing out an extra few hours to finish those sales figures when you’d really rather collapse into bed; getting a B instead of a B-minus on the final exam in a lecture class where you spent half your time texting; cramming for the G.R.E.s at night, because the information-industry job you got after college turned out to be deadening. Neuroenhancers don’t offer freedom. Rather, they facilitate a pinched, unromantic, grindingly efficient form of productivity.
If you find that tempting, be my guest. I am sure you can find directions via Google.
Jon Udell has a great presentation over at Slideshare on how to work in observable spaces – something that should be done, to a much larger extent, by academics. I quite agree (and really need to get better at this myself):
Bill Schiano and I, between ourselves, solved this one pretty quickly. (That is, we found the computer names, but not the extra thing, which is not mentioned on the site.)
(Incidentally, I also found SAGE, which was a pretty important computer system in its own right, as well as a computer company. Also UNIX, CEC 80 (which at least sounds like a computer) and "rank" and "crib". Oh well.)
rating: 4 of 5 stars
Detailed biography based on Nelson’s correspondence, gives a good picture of Nelson as a person, his relationships with superiors, subordinates, his common-law wife Emma Hamilton and her husband. This book is widely regarded as one of the best Nelson biographies, but I would have liked to see a bit more on strategy and tactics of the battles themselves – as it is, the sheer number of anguished letters home for love, money and fame can be a bit overwhelming, though it gives a good indication of all the myriad things a captain and admiral had to deal with.
Great biography, but a little discipline and tightening up here and there wouldn’t hurt.
I was delighted when I found this video, where James May (the cerebral third of Top Gear) talks to professor Alan Smeaton of Dublin City University about lifelogging – the recording of everything that happens to a person over a period of time, coupled with the construction of tools for making sense of the data.
In this example, James May wears a Sensecam for three days. The camera records everything he does (well, not everything, I assume – if you want privacy, you can always stick it inside your sweater) by taking a picture every 30 seconds, or when something (temperature, IR rays in front (indicating a person) or GPS location) changes. As it is said in the video, some people have been wearing these cameras for years – in fact, one of my pals from the iAD project, Cathal Gurrin, has worn one for at least three years. (He wore it the first time we met, where it snapped a picture of me with my hand outstretched.)
The software demonstrated in the video groups the pictures into events, by comparing the pictures to each other. Of course, many of the pictures can be discarded in the interest of brevity – for instance, for anyone working in an office and driving to work, many of the pictures will be of two hands on a keyboard or a steering wheel, and can be discarded. But the rest remains, and with powerful computers you can spin through your day and see what you did on a certain date.
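The event-grouping step can be sketched as follows, assuming each picture has already been reduced to a numeric feature vector. The cosine measure and the 0.8 threshold are my assumptions, not necessarily what the Sensecam software uses: the idea is simply that a new event starts whenever consecutive pictures stop looking alike.

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def segment_events(features, threshold=0.8):
    """Group consecutive pictures into events; start a new event when a
    picture looks too different from the previous one."""
    if not features:
        return []
    events, current = [], [0]
    for i in range(1, len(features)):
        if cosine(features[i - 1], features[i]) < threshold:
            events.append(current)
            current = [i]
        else:
            current.append(i)
    events.append(current)
    return events

# Two near-identical pictures, then a scene change, then two more alike:
print(segment_events([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]]))
# → [[0, 1], [2, 3]]
```

The long runs of keyboard-and-steering-wheel pictures would collapse into single events under exactly this kind of grouping, which is what makes spinning through a whole day feasible.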
And here is the thing: This means that you will increasingly have the option of never forgetting anything again. You know how it is – you may have forgotten everything about some event, and then something – a smell, a movement, a particular color – makes you remember by triggering whatever part of your brain (or, more precisely, whatever strands of your intracranial network) this particular memory is stored in. Memory is associative, meaning that if we have a few clues, we can access whatever is in there, even though it had been forgotten.
Now, a set of pictures taken at 30-second intervals, coupled together in an easy-to-use and powerful interface, that is a rather powerful aide-de-memoire.
Forgetting, however, is done for a purpose – to allow you to concentrate on what you are doing rather than using spare brain cycles on the constant upkeep of enormous but unimportant memories. For this system to be effective, I assume it would need to be helpful in forgetting as well as remembering – and since everything would be stored, you would not actually have to expend so much effort remembering things: given a decent interface, you could always look it up again, much as we look things up in a notebook.
Think about that – remembering everything – or, at least being able to recall it at will. Useful – or an unnecessary distraction?
Virginia Postrel makes an excellent point in this article in the Atlantic: The US health care system, for all its flaws, drives research forward in a way that no other country can do:
Looking at the crazy-quilt American system, you might imagine that someone somewhere has figured out how to deliver the best possible health care to everyone, at no charge to patients and minimal cost to the insurer or the public treasury. But nobody has. In a public system, trade-offs don’t go away; if anything, they get harder.
The good thing about a decentralized, largely private system like ours is that health care constantly gets weighed against everything else in the economy. No single authority has to decide whether 15 percent or 20 percent or 25 percent is the “right” amount of GDP to spend on health care, just as no single authority has to decide how much to spend on food or clothing or entertainment. Different individuals and organizations can make different trade-offs. Centralized systems, by contrast, have one health budget. This treatment gets funded, and that one doesn’t.
In other words, markets drive innovation – sometimes in directions not deemed to be in the (whole) public’s interest, such as plastic surgery – in a way centralized coordination cannot. It is wasteful, but effective. And somewhere in the world there needs to be some slack for new things to come up, which can then be cost-effectively (or, rather, cost-efficiently) implemented in other places.
Which reminds me of Peter Drucker’s statement: "There is nothing so useless as doing efficiently that which should not be done at all." Not unknown in public health care systems…
(Via John Tierney)
Effective creativity is often accomplished by copying, by the creation of certain templates that work well, which are then changed according to need and context. Digital technology makes copying trivial, and search technology makes finding usable templates easy. So how do we judge creativity when combinations and associations can be done semi-automatically?
One of my favorite quotes is supposedly by Fyodor Dostoyevsky: "There are only two books written: Someone goes on a journey, or a stranger comes to town." Thinking about it, it is surprisingly easy to divide the books you have read into one or the other. The interesting part, however, lies not in the copying, but in the abstraction: The creation of new categories, archetypes, models and templates from recognizing new dimensions of similarity in previously seemingly unrelated instances of creative work.
Here is a demonstration, fresh from Youtube, demonstrating how Disney reuses character movements, especially in dance scenes:
Of course, anyone who has seen Fantasia recognizes that there are similarities between Disney movies, even schools: the "angular" ones represented by 101 Dalmatians, Sleeping Beauty and Mulan, and the more rounded, cutesy ones represented by Bambi, The Jungle Book and Robin Hood. (Tom Wolfe referred to this difference – he was talking about car design, but what the heck – as Apollonian versus Dionysian, and apparently borrowed that distinction from Nietzsche. But I digress.)
This video, I suspect, was created by someone recognizing movements, and putting the demonstration together manually. But in the future, search and other information access technologies will allow us to find such dimensions simply by automatically exploring similarities in the digital representations of creative works – computers finding patterns where we do not.
One example (albeit aided by human categorization) of this is the Pandora music service, where the user enters a song or an artist, and Pandora finds music that sounds similar to the song or artist entered. This can produce interesting effects: I found, for instance, that there is a lot of similarity (at least Pandora seems to think so, and I agree, though I didn’t see it myself) between U2 and Pink Floyd. And imagine my surprise when, on my U2 channel (where the seed song was Still Haven’t Found What I’m Looking For), a song by Julio Iglesias popped up. Normally I wouldn’t be caught dead listening to Julio Iglesias, but apparently this one song was sufficiently similar in its musical makeup to make it into the U2 channel. (I don’t remember the name of the song now, but remember that I liked it.)
In other words, digital technology enables us to discover categorization schemes and visualize them. Categorization is power, because it shapes how we think about and find information. In business terms, new ways to categorize information can mean new business models, or at least disruptions of the old. Pandora has interesting implications for artist brand equity, for instance: If I wanted to find music that sounded like U2 before, my best shot would be to buy a U2 record. Now I can listen to my U2 channel on Pandora and get music from many musicians, most of whom are totally unknown to me, found based on technical comparisons of specific attributes of their music (effectively, a form of factor analysis) rather than the source of the creativity.
I am not sure how this will work for artists in general. On one hand, there is the argument that in order to make it in the digital world, you must be more predictable, findable, and (like newspaper headlines) not too ironic. On the other hand, if you create something new – a nugget of creativity rather than a stream – this single instance will achieve wider distribution than before, especially if it is complex and hard to categorize (or, at least, rich in elements that can be categorized but inconclusive in itself).
Susan Boyle, the instant surprise on the Britain’s Got Talent show, is now past 20 million views on Youtube and is just that – an instant, rich and interesting nugget of information (and considerable enjoyment) which more or less explodes across the world. She’ll do just fine in this world, thank you very much. Search technology or not…
Roy Youngman, at a recent nGenera teleconference, told this story about how lean times forced a company to think creatively. As he very rightly says in the beginning:
Some people think that “do more with less” means make people work harder to compensate for the people that are let go. Other people think that “do more with less” means “work smarter, not harder”. If you think about it, both of these perceptions are rooted in a fundamental assumption that your existing operation is basically inefficient – that you have people wasting time or you have people working on the wrong things or you have people following bad processes. Depending upon your state of organizational maturity, all this may be true in which case you can “do more with less” by asking fewer people to work both harder and smarter. But good stewards of owner equity should always be trying to eliminate operational inefficiencies at all times, both good and bad. So what do you do if your people are already working hard, smart, and on the right things with the challenge of “do more with less”?
There is only one answer: Innovate!
I’ll expand on my own ideas for how to do this in another blog post – but in the meantime, read Roy’s story of how to deliver a data warehousing solution without spending much money.
rating: 4 of 5 stars
Funny little ditty about Antoine, who seeks to lighten the burden of his intelligence by willfully becoming stupid, with hilarious results. He tries alcoholism and suicide, and ends up using drugs to lower his intelligence and increase his financial fortune. Less fun for the storyline, which jumps here and there, than for the paradoxical language and satirical exaggerations. Needs rereading to get it all, but then again, I think I have achieved part of what Antoine is trying to do…
rating: 5 of 5 stars
This is an important book, of huge ambition and with a breathtaking canvas, though it occasionally, particularly towards the end, fails to quite live up to its ambitions. It has divided reviewers in every country in which it has been published (first written in French, and relatively late in being translated into English.) I come down on the side of those who like it – or, rather, those who admire the book while feeling rather nauseated by its contents.
Jonathan Littell has attempted to write the equivalent of Claude Lanzmann’s Shoah – but from the perspective of the perpetrators. The book’s protagonist (it is written in the first person) is Dr. Maximilian Aue, a fictional SS officer with an intellectual mind and an extremely complex and warped character, who is writing his autobiography with a dire warning to the reader: While he has done despicable things, who can say they wouldn’t do the same, if they had grown up like him and been put in the same circumstances?
Aue joins the SS for practical reasons and gradually descends into the cesspool that was Nazi Germany, rising in the ranks with an increasingly deviant mind. On his way, he works with the Einsatzkommandos in Poland and the Caucasus, narrowly escapes Stalingrad, inspects Auschwitz with a view to improving its efficiency, participates in the murder of the Hungarian Jews, and finally takes part in the fall of Berlin, always with intellectual detachment and cool rationalizations. A thinking SS man with powers to explore and observe, he is without will or ability to do anything other than excel at his job. He is saddened by the killings but more appalled by the lack of a scientific basis for deciding who is Jewish and who is not. (The description of a conference in the Caucasus discussing whether a certain group is Jewish or not is obscene in its similarity to any other scientific debate, coolly trying to determine whether 6,000 people should be exterminated or not, ending with the Wehrmacht blocking the extermination because they want to avoid the local population joining the partisans.) He deplores the treatment of the prisoners less for the suffering than for the reduction in productive capacity it causes: When he tries to obtain clothing and food for evacuating Auschwitz prisoners, it is not for humanitarian reasons, but because he has orders to use them as a workforce.
Almost as a subplot (and less believable) are the civilian experiences of the protagonist: Imprinted with an incestuous love for his sister, he is unable to engage with women and instead seeks out homosexual partners to act out his fantasies of his sister. As he sinks deeper and deeper, his veneer of civilization scrapes off and he loses himself in amnesic episodes, including one in which he probably kills his mother and stepfather. After that, he is followed by two policemen who, like a constantly reappearing conscience (much like the chorus in a classical Greek play), call him to justice. It all comes together in the end, with the fall of Berlin and the fall of Aue – though he survives, escapes to France, and settles down as a minor industrialist. Aue is a reprehensible but fascinating character – he is a Nazi, but this is not the rank stupidity of a Franz Suchomel (a Treblinka prison guard interviewed in Shoah) speaking; this is a cultured German with a wide intellectual foundation and some pretty screwed-up, seemingly internally consistent ideas about the world, capable of enjoying music but, significantly, not of playing it.
The book has been criticized for being overly long, for being sensationalistic (explicit sex and rape scenes) and for being written in an old-fashioned language. I disagree completely: The length of the book and its myriad of people assure that you forget some of them – an important reminder of the enormity of the crime described. People die like flies around Dr. Aue, and after a while you, along with him, hardly notice it anymore, aside from some single individuals (such as a 13 year old Jewish piano prodigy executed after hurting his hand and therefore not being able to play) that penetrate the protective shield the perpetrators erect around themselves. I used to work at a hospital many years ago, and recognize this protective shield and the holes in it: To function and be able to take care of patients, I had to make myself immune to their suffering – but occasionally, some single patient would break through my defenses, usually because I somehow could identify with them. Jonathan Littell, who has worked with aid organizations alleviating hunger in war-torn areas, seems to write from that perspective – but Dr. Aue does not heal people, he kills them.
The book contains a number of diversionary discussions – on the languages of the Caucasus, on the psychology of increasingly sadistic prison guards (they hit the prisoners not because they see them as subhuman, but because the prisoners persist in being human however much they are humiliated), on the Kantian imperative (in a discussion with Adolf Eichmann, no less), and on the differences and many similarities between Communism and Nazism. The book is also a study in bureaucracy and how to run projects that look good to your superiors even though the subject is execrable and the results, in the end, the same: The discussions on how many calories each prisoner should have, how much would disappear through theft, and to what extent one should reduce rations to weak prisoners in order to make them die faster would seem surreal were it not for the fact that they would sound like any other bureaucratic hearing if the subject were changed. Dr. Aue gets better at shaping his reports and steering them through the bureaucracy, but he also loses sight of the real impact of what he is doing, if he ever had it.
Jonathan Littell has derived his knowledge from books and from visits to the sites of many atrocities, and the book is historically accurate (with the stamp of approval from none other than Claude Lanzmann himself). Aue meets many historical figures, and you can sometimes see (or think you see) influences from other books: The Communism-Nazism discussion reminds me of Pinker’s The Blank Slate; Dr. Aue’s reflections on the word "dead" in many languages recall how Robert Jordan reflects on death in Hemingway’s For Whom the Bell Tolls; the description of the dead hippo floating in a pool in Tiergarten with an artillery shell in its back is straight out of Antony Beevor’s The Fall of Berlin 1945. There is recurring symbolism with birds representing pure but vulnerable beauty: Ducks ("with beautiful green necks") are noted in reflective moments, Aue goes shooting with Albert Speer, cranes escape Berlin "not knowing how lucky they are." Overall, The Kindly Ones reminds me most of Günter Grass’ Die Blechtrommel – and Grass, almost predictably, had war experiences he carefully kept secret.
This is not a book to like, but to admire, because you gradually become fascinated with Dr. Aue despite his abominations. As he says in the beginning: How do you know you wouldn’t do the same, given the same upbringing and the same environment? Inhabitants of Yugoslavia, Darfur, Cambodia, Chechnya, and Rwanda certainly would have no problem answering that question. Those of us living in more civilized societies should perhaps count ourselves deliriously happy that we have never needed to confront it.
rating: 2 of 5 stars
Extremely violent and dystopic, but fantastic writing, where the characters reveal themselves through dialogue and quiet observations. This book has been highly praised by critics and turned into a movie, but for me it tipped over a bit – there is such a thing as too much blood and cruelty, even if it is painted with economical strokes and brilliant contrasts.
A few years ago, I wrote an essay about how Microsoft had become the new IBM – i.e., the dominant, love-to-hate company of the computer industry. In this interesting article, John Lanchester discusses how Google is now stepping into that role, with its aggressive moves into making the world searchable, and a lot more than you would like findable. Interesting point:
[...] as Google makes clear, nothing short of a court order is going to stop it digitising every book in print. Google doesn’t accept that that constitutes a violation of copyright. But the company won’t even discuss the physical process by which it scans the books: a classic example of how very free it is with other people’s intellectual property, while being highly protective of its own.
This issue, in all its various forms, isn’t going to go away. Book Search, Street View and many of Google’s other offerings simply bulldoze existing ideas of how things are and how they should be done. I was highly critical of Gmail when it first came in, on the grounds that the superbly effective mail system came at the unacceptable price of allowing Google to scan all emails and place text ads. But I soon began using it, because it was free, and because it’s such good software, and because I frankly never noticed the ads.
He goes on to show how a hard disk crash and a botched backup restore left him without his documents, until it dawned on him that, yes, Gmail had them all, ready for download. So big brothers can be nice, but they are still Big Brothers…
I would love to have a set of noise-canceling headphones that could filter out bureaucratese and administrative noise from academic and other meetings, so that only relevant and interesting information reaches the wearer’s ears.
(Yes, I initially sent this to some collaborators as an April Fool’s joke. But eventually, this could really be done.)
As an academic and a technologist, I inevitably have to sit through many meetings of a bureaucratic nature, characterized by a low information signal-to-noise ratio, slow tempo and endless repetitions. As Brad DeLong has described it, "an academic meeting is not over when everything has been said, but when everything has been said by everyone."
Imagine a collaboration with a good search technology company, such as FAST (now Microsoft) and a good headphone company, such as BOSE. Noise-canceling headphones work by recording the ambient sound picture and then filtering out noise (characterized by an irregular wave pattern), only letting well-modulated sound waves, such as voices and music, through.
It is a small step to strengthen this filtering by using advanced search technologies such as sentiment analysis, which applies automated semantic analysis to words and phrases. It is now mostly used to automatically evaluate blog comments, but it could be applied directly to the audio patterns coming in, perhaps initially using speech-to-text conversion. Since administrative and bureaucratic language is characterized by many easily recognizable phrases and a high degree of repetition, it should lend itself well to filtering, both in an initial phase and through collaborative techniques (easily implementable with a red "banish" button on the headphones themselves). Personalization could also add value, by filtering out stuff you have heard before and only letting through things that are new to you.
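The phrase-filtering step could be sketched roughly as follows – a toy illustration only, assuming a speech-to-text stage has already produced a transcript. The phrase list and function names here are hypothetical, not any real product’s API:

```python
# Toy sketch of the proposed "bureaucratese filter".
# Assumes upstream speech-to-text has already turned audio into utterances.

# Phrases flagged as administrative noise, e.g. collected via the
# shared "banish" button described above (hypothetical examples).
BANISHED_PHRASES = [
    "going forward",
    "at the end of the day",
    "circle back",
    "as per my previous email",
]

def filter_transcript(utterances, banished=BANISHED_PHRASES):
    """Drop utterances containing banished phrases; pass the rest through."""
    passed = []
    for utterance in utterances:
        lowered = utterance.lower()
        if any(phrase in lowered for phrase in banished):
            continue  # filtered out: the wearer hears silence instead
        passed.append(utterance)
    return passed

meeting = [
    "Going forward, we should circle back on the action items.",
    "The experiment showed a 30% drop in latency.",
    "At the end of the day, process is what matters.",
]
print(filter_transcript(meeting))
# only the second utterance survives
```

A real version would of course need the personalization and collaborative-weighting layers described above, but the core idea is just this kind of phrase-level suppression.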
Response time might be a problem, but professors are deemed to be a bit slow in their reaction to external stimuli anyway, so I doubt if anyone would notice any difference.
(Initial responses from my collaborators suggested dealing with this by skipping the meetings altogether, which I must admit is an attractive alternative. But not everyone can do that, and besides, there is always the chance that something might slip through the filter.) And imagine the market opportunities, for students, journalists, politicians, parents (at PTA meetings). Not to mention how this would put the final nail in the TV advertising coffin. I suppose seeing a movie such as Groundhog Day would be hard, but personalization would eventually fix that.
Ah, the dreams of reason…
Collaborative platforms are all the rage at the moment – every company wants one, has one, lives and dies by one. Cisco’s CEO John Chambers blogs, Michael Dell is on Twitter, Microsoft is selling Sharepoint by the truckload (well, figuratively speaking) and every software company in the world is busy putting 2.0 behind their offering, from backup to presentations.
I am worrying that all these platforms will lead to less collaboration, not more.
First, a personal observation: I am what Malone and Rockart in 1991 termed an intellectual mercenary. That is, I think of myself as a company of one, working for many organizations, but never a member of only one community – and never, for a number of reasons, a full member of any. Sure, I have been on the faculty of the same business school since 1996 and have had relationships with more or less the same set of people in the consulting world since 1994, and currently I am in year three of what I hope will be an eight-year research project on information access technologies. But that doesn’t change the fact that I am not a full member, at least not technically speaking, of any one of them.
My base job, as an academic, comes with a technical infrastructure geared towards a physical presence on campus (at least on a regular basis) and a lack of visibility outside it. The school has an outdated infrastructure, but since most of the faculty think it works just fine, few things change. So I have to maintain my own web site and email to project a less antiquated face to the rest of the world. Fine. Then I work with two companies (one mostly in the States and one mostly in Norway) on various projects. Their technical infrastructure is more modern, but tightly integrated around a different platform than the one my main place of work uses.
The interesting thing, of course, is that as long as communication took place via email and collaboration was done sending Word documents around, everything was hunky dory – I could use whatever I wanted. Now each collaboration partner has their own collaborative platform, with integrated calendaring, Twittering, email, directories and Turing knows what else – and I find myself fighting new user interfaces every time I need to do something.
Software evolves from application to platform to standard. The problem is that we do not yet have standards for collaborative activities, only for the end results of those activities: reports, teleconferences, single emails, and presentations. If you want integration, you have to belong to one organization, and that organization only. Which is fine for most people, but not for those of us who want to contain multitudes, and do.
In its current state, collaborative software strengthens intra-organizational collaboration but weakens inter-organizational collaboration. We are back to the days when some companies used WordPerfect and some used Word, and everyone fiddled with translations between them. Now we have to find ways to maintain a personal creative space (in my case, Evernote, Word, and Windows Live Writer) while injecting into and extracting the results from various collaborative platforms. I find myself yearning for something that will maintain my collaborative activities in much the same way Live Writer (along with Live Sync, the best product Microsoft has come up with in a decade) allows me to suck down and load up posts to my various blogs. (A bonus would be if you could update the various Wikipedia articles you care about as well, but I digress.)
An alternative is a shared platform, such as Google Docs, but again, that forces you to work in a different interface (though it is very similar to Word), does not bring the work inside your own space (where you are reminded to do it), and forces everyone else to move out of theirs. What we need here is some serious standards work in XML, and a recognition by the platform providers that a substantial amount of collaboration (and, I suspect, much of the innovation) comes from those who jump between platforms.
So, here is my message to the collaborative platform vendors: Tear down the walls before you have erected them! Do it by offering APIs or by facilitating cross-platform synchronization. While we are at it, some software company with a stake in keeping its operating-system dominance should probably take me up on creating a cross-platform personal collaboration client.
I want my PCC revolution now!
These exercises look like just the thing, if I can only get into the habit.