Category Archives: Digital reflections

Social media, disruption, and Comic Sans

I gave a presentation at the RECORD seminar in Oslo today, with the title “Social media as a disruptive innovation”. The title was given to me – I do not think social media is much of a disruptive technology (with the possible exception of headhunting) but certain aspects of it, such as its ability to generate metadata and thereby organize information, certainly are.

Incidentally, speaking at an architecture and design school is always interesting – I feel like I really ought to shave my head, get a Steve Jobs mock turtleneck and a fake Mac to blend in with the natives. On Twitter, most comments (#recordseminar) were about my use of Comic Sans, which I do because a) I like it, and b) it is fun using it in front of designers since they are so predictably irritated by it.

7-hour train journey via BitTorrent

The Norwegian broadcaster NRK recently made a 7-hour program about the very scenic train journey from Bergen to Oslo. The program was hugely successful despite the rather slow subject, offering long views from the front of the train interspersed with interviews and various other happenings along the ride. Here is a selection:

The raw film from the front camera is now being offered as a free BitTorrent download under a CC license. There is even a competition (in Norwegian only) for the best reuse of the footage.

Kudos to the people behind NRK Beta, the experimental part of NRK, who again come up with interesting ways of making their material available!

Update 20.12: Boingboinged!

How about donating to Wikipedia?

As the holidays come up, how about making a donation to Wikipedia? The canonical Internet encyclopedia has no income other than donations, and needs money to cover technical and other running costs.

A donation to Wikipedia – no matter how small – ensures that you will still have access to one of the world’s most complete and updated sources of knowledge. It is also a way to support a project whose goal is to provide all the world’s knowledge to all the world’s people, in a form and with an interface that permits everyone to use and enhance it.

I am convinced that Wikipedia today is the single most influential collection of knowledge available, and the one that helps the most people, be they pupils, students, knowledge workers or anyone without access to the knowledge and learning infrastructure we in the richer and more liberal parts of the world take for granted. 350 million people go to Wikipedia to find neutral and detailed knowledge about the world we live in. Do your part so that it can be sustained and evolved further!

(Incidentally, it is really simple as well. Credit cards accepted. Easily.)

Cringely bows out (with predictions)

Bob Cringely has written his last column for PBS, and bows out after 11 years. I for one will miss his long, mostly insightful and always readable columns. He reliably comes up with ideas that differ from what other pundits write, is frequently wrong (4 for 11 in the prediction market is not wonderful, exactly) but always interesting.

And I do like his latest prediction: that VCs will channel money into starting small banks that can extend credit to the very creditworthy companies that are currently cash-strapped because most of the incentives and focus are on mortgages. Might not happen, but deserves to.

Bob will, of course, not stop writing (he has his own website, of course, like any professional tech writer) but I particularly like the long essays he has been posting at PBS.org and hope he will continue that format, in some highly visible channel.

Yet another argument for CC books

I bought Cory Doctorow’s Little Brother sometime this spring, don’t quite remember where (I think it was Amazon). When I read it, I discovered that half of page 198 was torn out. This is rather irritating, especially when you are into the book and would like to continue. Previously, my two solutions would have been to skip the missing page(s) and continue reading, or to exchange the book for a new one (which entails taking a break until the new book arrives).

This time, I just found Cory’s full text on craphound.com and printed those pages out on my printer. Then I left them in the book. I saved time and kept the continuity, the bookseller saved time and money not having to exchange my book, and the publisher saved the cost of another copy.

Simple, isn’t it?

No need to panic

Salon has an excellent interview with Dennis Baron, the author of A Better Pencil: Readers, Writers, and the Digital Revolution (and has a really interesting blog frontpage). His thesis is that the current panic about the Internet making us dumber or destroying the ability to write etc., etc., is a repeat of similar panics caused by previous innovations in content production and not to be worried about. From Plato (on writing) to Wodehouse (who certainly didn’t like dictaphones), pundits have obsessed about how the technology shapes our thinking and foreseen doom and gloom unless we return to the good old days, be it with handwriting or correction fluid.

This, too, shall pass.

(via BoingBoing)

Our search-detected personalities

Personas is an interesting project at the Media Lab which takes your (or anyone else’s) name as input and then determines your personality based on what it finds about you on the web, generating a graphical representation. This is my result:

[image: my Personas result]

…which I found rather disturbing: fame, sports and religion seem to take way too much space here. The reason, of course, is that my name is rather common in Norway, and, for example, a formerly well-known skier skews the results, even though I seem to be the most web-known person with that name.

Anyway, if you have a rare name, it might be accurate – and if your name is John Smith, you might be left with an average, possibly tilted a bit towards Pocahontas:

[image: Personas result for “John Smith”]

Anyway – try it out. You might be surprised. And please remember – this is an art project, not an accurate representation of anything…

Update September 20: I somehow forgot to point to Naomi Haque’s blog post about Personas, with a discussion of how social networking changes our perception of self.

GRA6821 Fifth lecture: Technology in value networks

(Update: Moved to October 2nd. Note assignment)

In this lecture, we will continue to investigate value networks and how technology plays a part in establishing a company that mediates between customers – be it a telephone company or a Facebook, a bank or Craigslist.

Please read and be prepared to discuss:

Further reading (for the specially interested):

  • Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, Yale University Press. Available as a wiki at benkler.org
  • Shirky, C. (2009). Here Comes Everybody.

Study questions to aid your preparation:

  1. What is the ownership structure of Schibsted – and what are the implications of it – for the strategic outlook?
  2. Visit Google News – is this type of service a threat to Schibsted? Why or why not?
  3. How does Google’s business model differ from Schibsted’s?
  4. What implications does the last statement – about the cathedral or stock market approach – have for Schibsted’s future?

Assignment 2, to be handed in on Blackboard before October 1 at 20:00:

Write a short memo to Kjell Aamot and explain to him why he should (or should not) allow the other parts of Schibsted (such as Sesam) to crawl finn.no’s ads. Maximum 400 words; use of theory and good examples is important. NOTE: Please limit the discussion to things that are in the case, and at the time of the case. Things have changed at Schibsted since then – but we will take that discussion in class.

I am looking forward to a lively discussion and interesting assignments.

(For a list of all the classes, see here.)

End user computing as vision and reality

My esteemed colleague and similarly jaded visionary Vaughan Merlyn has written a rousing call for a new vision for the IT organization. While I do agree with everything he says – in principle – I think we still have a long way to go before the nitty-gritty part of IT has moved from server room to cloud, leaving the users to creatively combine and automate their world in cooperation with their colleagues, customers and suppliers. While I do agree that the IT organization is better served by helping users help themselves than by doing their work for them, I am not sure all the users out there are ready to fish for themselves yet, no matter how easy to use the search engines, social communities and system implementation tools become.

The enabling vision is not a new thing. I remember a video (or, rather, a film) from IBM from the mid-80s about End User Computing – the notion that the role of IT was to provide the tools for end users, who could then build their own systems rather than wait for IT to build them. (This, incidentally, was also the original motivation behind COBOL: the language was supposedly so intuitive that end users would be able to describe the processes they wanted automated directly to the computer, obviating the need for anyone in a white coat.) The movie showed an end user (for some reason a very short man in a suit) sitting in front of a 3270 terminal running VM/CMS. Next to him was a friendly person from the EUC group explaining how to use the friendly terminal, which towered over the slightly intimidated-looking end user like the ventilation shaft of an ocean liner.

It didn’t look very convincing to me. One reason for this was that at that time I was teaching (reasonably smart) business students how to do statistical analysis on an IBM 4381 and knew that many of them could not even operate the terminal, which had a tendency to jump between the various layers of the operating system and also had a mysterious button called SysRq, which still lingers, appendix-like, on the standard PC keyboard. Very few of those students were able to do much programming – but they were pretty good at filling in the blanks in programs someone already had written for them.

Of course, we now have gesture interfaces, endless storage, personal battery-powered devices and constant communication. But as the technology gets better, we cede more and more responsibility for how things work to the computer, meaning that we can use it until it breaks down (which it does) at which point we have no idea how things work. This is not the technology’s fault – it often contains everything you need to know to understand it rather than just use it. Take the wonderful new “computational search engine” Wolfram Alpha, for example: It can give you all kinds of answers to numerical questions, and will also (I haven’t seen it, but if the capabilities of Mathematica are anything to go by, it is great) allow you to explore, in a drill-down procedure, how it reached its answers.

This is wonderful – truly – but how many are going to use that feature? By extension: All of us have a spreadsheet program, but how many of an organization’s users can write a new spreadsheet rather than just use an already existing one?

For as long as I have worked with computers, each new leap in functionality and performance has been heralded as the necessary step to turn users from passive to active, from consumers of information to creators of knowledge. While each new technology generation, admittedly, has achieved some of this, it has always been less than was promised and much less than what was hoped for.

And so I think it is this time, too. Many people read Wikipedia, few write for it (though enough do). More importantly, many of Wikipedia’s users are unaware of how the knowledge therein is instantiated. Online forums have many more lurkers than contributors. And human ingenuity is unevenly distributed and will continue to be so.

So I think the IT department will continue to do what it is doing, in principle. It will be further from the metal and closer to the user, but as long as the world remains combinatorially complex and constantly changing, there will always be room for people who can see patterns, describe them, automate them and turn them into usable and connectable components. They will be fewer, think less of technology and more in terms of systems, and have less of a mismatch in terms of clothing and posture between themselves and their customers than before (much of it because the customers have embraced nerd chic, if not nerd knowledge).

The key for a continued IT career lies in taking charge of change rather than being affected by it. I think the future is great – and that we are still a long way from true end user computing. IT as a technology will be less interesting and more important in its invisible ubiquity. And Neal Stephenson’s analogy of a world of Eloi and Morlocks – of the many that consume and the few that understand – will still hold true.

I just hope I will still be a Morlock. With Eloi pay and dress sense.

What if you could remember everything?

I was delighted when I found this video, where James May (the cerebral third of Top Gear) talks to professor Alan Smeaton of Dublin City University about lifelogging – the recording of everything that happens to a person over a period of time, coupled with the construction of tools for making sense of the data.

In this example, James May wears a Sensecam for three days. The camera records everything he does (well, not everything, I assume – if you want privacy, you can always stick it inside your sweater) by taking a picture every 30 seconds, or when something changes (temperature, IR rays in front, indicating a person, or GPS location). As mentioned in the video, some people have been wearing these cameras for years – in fact, one of my pals from the iAD project, Cathal Gurrin, has worn one for at least three years. (He wore it the first time we met, when it snapped a picture of me with my hand outstretched.)

The software demonstrated in the video groups the pictures into events, by comparing the pictures to each other. Of course, many of the pictures can be discarded in the interest of brevity – for instance, for anyone working in an office and driving to work, many of the pictures will be of two hands on a keyboard or a steering wheel, and can be discarded. But the rest remains, and with powerful computers you can spin through your day and see what you did on a certain date.
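To make the event-grouping idea concrete, here is a minimal sketch (in Python, using the Pillow imaging library) of how a stream of lifelog photos could be split into events by comparing color histograms of consecutive frames. The file layout, threshold and distance measure are my own assumptions for illustration – not the actual Sensecam software, which uses far more sophisticated comparisons.

```python
# A minimal sketch of the event-grouping idea, not Smeaton's actual algorithm:
# compare consecutive lifelog photos by color histogram and start a new
# "event" whenever two neighbouring pictures look sufficiently different.
# File names, the threshold value and the distance measure are assumptions.

from PIL import Image

def histogram_distance(path_a, path_b, size=(64, 64)):
    """Crude visual distance: normalised sum of absolute differences
    between the RGB histograms of two downscaled images (0..1)."""
    hists = []
    for path in (path_a, path_b):
        img = Image.open(path).convert("RGB").resize(size)
        hists.append(img.histogram())          # 768 bins (256 per channel)
    total = float(size[0] * size[1] * 3)
    return sum(abs(a - b) for a, b in zip(*hists)) / (2 * total)

def group_into_events(photo_paths, threshold=0.4):
    """Split a chronological list of photo paths into events."""
    events, current = [], [photo_paths[0]]
    for prev, curr in zip(photo_paths, photo_paths[1:]):
        if histogram_distance(prev, curr) > threshold:
            events.append(current)             # scene changed: close the event
            current = []
        current.append(curr)
    events.append(current)
    return events

# Usage (hypothetical folder of time-ordered Sensecam pictures):
# events = group_into_events(sorted(glob.glob("sensecam/*.jpg")))
```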

And here is the thing: this means that you will increasingly have the option of never forgetting anything again. You know how it is – you may have forgotten everything about some event, and then something – a smell, a movement, a particular color – makes you remember, by triggering whatever part of your brain (or, more precisely, whichever strands of your intracranial network) this particular memory is stored in. Memory is associative, meaning that if we have a few clues, we can access whatever is in there, even though it had been forgotten.

Now, a set of pictures taken at 30-second intervals, coupled together in an easy-to-use and powerful interface – that is a rather powerful aide-mémoire.

Forgetting, however, is done for a purpose – to allow you to concentrate on what you are doing rather than using spare brain cycles on the constant upkeep of enormous but unimportant memories. For this system to be effective, I assume it would need to be helpful in forgetting as well as remembering – and since everything would be stored, you would not actually have to expend so much effort remembering things: given a decent interface, you could always look it up again, much as we look things up in a notebook.

Think about that – remembering everything – or, at least being able to recall it at will. Useful – or an unnecessary distraction?

Search and effectiveness in creativity

Effective creativity is often accomplished by copying – by the creation of certain templates that work well, which are then changed according to need and context. Digital technology makes copying trivial, and search technology makes finding usable templates easy. So how do we judge creativity when combinations and associations can be done semi-automatically?

One of my favorite quotes is supposedly by Fyodor Dostoyevsky: "There are only two books written: Someone goes on a journey, or a stranger comes to town." Thinking about it, it is surprisingly easy to divide the books you have read into one or the other. The interesting part, however, lies not in the copying, but in the abstraction: The creation of new categories, archetypes, models and templates from recognizing new dimensions of similarity in previously seemingly unrelated instances of creative work.

Here is a video, fresh from YouTube, demonstrating how Disney reuses character movements, especially in dance scenes:

Of course, anyone who has seen Fantasia recognizes that there are similarities between Disney movies, even schools: the “angular” one represented by 101 Dalmatians, Sleeping Beauty and Mulan, and the more rounded, cutesy one represented by Bambi, The Jungle Book and Robin Hood. (Tom Wolfe referred to this difference – he was talking about car design, but what the heck – as Apollonian versus Dionysian, and apparently borrowed that distinction from Nietzsche. But I digress.)

This video, I suspect, was created by someone recognizing the movements and putting the demonstration together manually. But in the future, search and other information access technologies will allow us to find such dimensions simply by automatically exploring similarities in the digital representations of creative works – computers finding patterns where we do not.

One example (albeit aided by human categorization) of this is the Pandora music service, where the user enters a song or an artist, and Pandora finds music that sounds similar to the song or artist entered. This can produce interesting effects: I found, for instance, that there is a lot of similarity (at least Pandora seems to think so, and I agree, though I hadn’t noticed it myself) between U2 and Pink Floyd. And imagine my surprise when, on my U2 channel (where the seed song was “Still Haven’t Found What I’m Looking For”), a song by Julio Iglesias popped up. Normally I wouldn’t be caught dead listening to Julio Iglesias, but apparently this one song was sufficiently similar in its musical makeup to make it into the U2 channel. (I don’t remember the name of the song now, but I remember that I liked it.)

In other words, digital technology enables us to discover categorization schemes and visualize them. Categorization is power, because it shapes how we think about and find information. In business terms, new ways to categorize information can mean new business models, or at least disruptions of the old. Pandora has interesting implications for artist brand equity, for instance: if I wanted to find music that sounded like U2 before, my best shot would be to buy a U2 record. Now I can listen to my U2 channel on Pandora and get music from many musicians, most of whom are totally unknown to me, found based on technical comparisons of specific attributes of their music (effectively, a form of factor analysis) rather than the source of the creativity.
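As an illustration of the kind of comparison involved, here is a toy sketch in Python: each song is described by a small vector of musical attributes, and “sounds like” is reduced to the cosine of the angle between vectors. The attribute names, the numbers and the catalogue entries are all invented; Pandora’s actual Music Genome uses hundreds of hand-annotated dimensions per song.

```python
# A toy sketch of attribute-based song matching in the spirit of Pandora's
# approach. The attributes and scores below are invented for illustration.

import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each song is scored on a few (hypothetical) musical attributes:
# [electric guitar, vocal intensity, tempo, synth texture, acoustic feel]
catalogue = {
    "U2 - Still Haven't Found What I'm Looking For": [0.8, 0.9, 0.6, 0.3, 0.4],
    "Pink Floyd - Wish You Were Here":               [0.7, 0.7, 0.4, 0.4, 0.6],
    "Some unknown artist - Some song":               [0.8, 0.8, 0.6, 0.2, 0.5],
}

def most_similar(seed, catalogue):
    """Rank every other song by similarity to the seed song."""
    seed_vec = catalogue[seed]
    others = ((title, cosine_similarity(seed_vec, vec))
              for title, vec in catalogue.items() if title != seed)
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(most_similar("U2 - Still Haven't Found What I'm Looking For", catalogue))
```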

I am not sure how this will work for artists in general. On one hand, there is the argument that in order to make it in the digital world, you must be more predictable, findable, and (like newspaper headlines) not too ironic. On the other hand, if you create something new – a nugget of creativity, rather than a stream – this single instance will achieve wider distribution than before, especially if it is complex and hard to categorize (or, at least, rich in elements that can be categorized but inconclusive in itself).

Susan Boyle, the instant surprise on the Britain’s Got Talent show, is now past 20 million views on YouTube and is just that – an instant, rich and interesting nugget of information (and considerable enjoyment) which more or less explodes across the world. She’ll do just fine in this world, thank you very much. Search technology or not…

Google edging closer to being "the new Microsoft"

A few years ago, I wrote an essay about how Microsoft had become the new IBM – i.e., the dominant, love-to-hate company of the computer industry. In this interesting article, John Lanchester discusses how Google is now stepping into that role, with its aggressive moves into making the world searchable, and a lot more than you would like findable. Interesting point:

[…] as Google makes clear, nothing short of a court order is going to stop it digitising every book in print. Google doesn’t accept that that constitutes a violation of copyright. But the company won’t even discuss the physical process by which it scans the books: a classic example of how very free it is with other people’s intellectual property, while being highly protective of its own.

This issue, in all its various forms, isn’t going to go away. Book Search, Street View and many of Google’s other offerings simply bulldoze existing ideas of how things are and how they should be done. I was highly critical of Gmail when it first came in, on the grounds that the superbly effective mail system came at the unacceptable price of allowing Google to scan all emails and place text ads. But I soon began using it, because it was free, and because it’s such good software, and because I frankly never noticed the ads.

He goes on to show how a hard disk crash and a botched backup restore left him without his documents, until it dawned on him that, yes, Gmail had them all, ready for download. So big brothers can be nice, but they are still Big Brothers…

A product I would really like to see…

I would love to have a set of noise-canceling headphones that could filter out bureaucratese and administrative noise from academic and other meetings, so that only relevant and interesting information reaches the wearer’s ears.

(Yes, I initially sent this to some collaborators as an April Fool’s joke. But eventually, this could really be done.)

As an academic and a technologist, I inevitably have to sit through many meetings of a bureaucratic nature, characterized by a low signal-to-noise ratio, slow tempo and endless repetition. As Brad DeLong has described it, "an academic meeting is not over when everything has been said, but when everything has been said by everyone."

Imagine a collaboration between a good search technology company, such as FAST (now Microsoft), and a good headphone company, such as Bose. Noise-canceling headphones work by sampling the ambient sound picture and generating an inverted signal that cancels out the steady background noise, letting less predictable, information-carrying sounds, such as voices and music, through.

It is a small step to strengthen this filtering by using advanced search technologies such as sentiment analysis, which applies automated semantic analysis to words and phrases. It is now mostly used to automatically evaluate blog comments, but it could be applied directly to the incoming audio, perhaps initially via speech-to-text conversion. Since administrative and bureaucratic language is characterized by many easily recognizable phrases and a high degree of repetition, it should lend itself well to filtering, both in an initial phase and through collaborative techniques (easily implementable with a red "banish" button on the headphones themselves). Personalization could also add value, by filtering out stuff you have heard before and only letting through things that are new to you.
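Just to show how simple the core of such a filter could be, here is a tongue-in-cheek sketch in Python, assuming the audio has already been run through speech-to-text. The phrase list, scoring and threshold are of course my own inventions.

```python
# A tongue-in-cheek sketch of the "bureaucratese filter", operating on a
# speech-to-text transcript. Phrase list and threshold are made up.

BANISHED_PHRASES = [
    "going forward", "synergies", "per the process",
    "circle back", "as per the previous meeting", "action item",
]

def bureaucratese_score(sentence, banished=BANISHED_PHRASES):
    """Fraction of banished phrases that occur in the sentence."""
    text = sentence.lower()
    hits = sum(1 for phrase in banished if phrase in text)
    return hits / len(banished)

def filter_meeting(transcript, threshold=0.0):
    """Let through only sentences with no (or little) administrative noise."""
    return [s for s in transcript if bureaucratese_score(s) <= threshold]

meeting = [
    "Going forward we need to circle back on the action item list.",
    "The new search index cut query latency by half.",
]
print(filter_meeting(meeting))   # only the second sentence survives
```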

Response time might be a problem, but professors are deemed to be a bit slow in their reaction to external stimuli anyway, so I doubt if anyone would notice any difference.

(Initial responses from my collaborators suggested dealing with this by skipping the meetings altogether, which I must admit is an attractive alternative. But not everyone can do that, and besides, there is always the chance that something might slip through the filter.) And imagine the market opportunities, for students, journalists, politicians, parents (at PTA meetings). Not to mention how this would put the final nail in the TV advertising coffin. I suppose seeing a movie such as Groundhog Day would be hard, but personalization would eventually fix that.

Ah, the dreams of reason

Wolfram is at it again

Stephen Wolfram’s next project, the Wolfram|Alpha search "engine" (or, rather, answer machine for everything that is computable), is due out in May (visit it here). To me it seems like a combination of Google, Cyc and, perhaps, Mathematica. It certainly is interesting and should do much for factual search, not to mention conversational interfaces to search. Nova Spivack thinks it is as important as Google. Doug Lenat (in the comment field to Spivack’s blog post) says:

[…] it’s not AI, and not aiming to be, so it shouldn’t be measured by contrasting it with HAL or Cyc but with Google or Yahoo. At its heart is a formal Mathematica representation. Its inference engine is basically a large number of individually hand-engineered scripts for tapping into data which he and his team have spent the last several years gathering and "curating". For example, he has assembled tables of historical financial information about countries’ GDP’s and about companies’ stock prices. In a small number of cases, he also connects via API to third party information, but mostly for realtime data such as a current stock price or current temperature. Rather than connecting to and relying on the current or future Semantic Web, Alpha computes its answers primarily from his own curated data to the extent possible; he sees Alpha as the home for almost all the information it needs, and will use to answer users’ queries.

Another way of seeing it might be as the latest shot at providing answers by processing rather than storage – which fits nicely with Wolfram’s idea of computational equivalence – that the universe can be described by a simple set of rules, which as far as I understand it means that all complexity is only apparent, not real, and only so because we have not yet understood the underlying algorithms.

I just can’t wait to try it out – and to see what the impact will be on more storage-intensive search engines and their use.

Update March 12: This is garnering some serious attention for a service that isn’t even in beta yet…

Shannon, explained…

Peter Cochrane has a simple and very useful explanation of Claude Shannon’s mathematical law of communication, complete with diagrams. And a warning that, when it comes to technology, magic won’t work there, either.

We might thus imagine the energy of a signal dispersed inside such a solid form in the same way that water is retained by the skin of a balloon. We can change the shape of the balloon but the amount of water stays the same. Similarly, different coding and modulation schemes can alter the ratios of the sides presented by Shannon’s equation.

We can certainly trade off signal power against noise and/or bandwidth and time, but we can never exceed the bounds set by nature.
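For reference, the bound Cochrane is describing is the Shannon–Hartley capacity formula, which ties together exactly the quantities being traded off:

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

where C is the maximum data rate in bits per second, B the bandwidth in hertz and S/N the signal-to-noise power ratio. A quick worked example: a 3 kHz telephone channel with a 30 dB signal-to-noise ratio (S/N = 1000) can carry at most about 3000 × log2(1001) ≈ 30 kbit/s – which is roughly why analog modems topped out around that speed, no matter how clever the coding scheme.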

Liveblogging from Sophia Antipolis

These are my running notes from a visit to Accenture’s Technology Labs in Sophia Antipolis, as part of a Master of Management program called "Strategic Business Development and Innovation" at the Norwegian School of Management.

Accenture’s Technology Labs is a relatively small organization: 200 researchers out of Accenture’s 180,000 employees. There are four tech labs – Silicon Valley, Chicago (the largest), Sophia Antipolis and Bangalore. In principle each should be able to do everything, but in practice there is specialization. The four main activities of the tech labs are technology visioning, research, development of specific platforms, and innovation workshops (with clients, press, consultants etc.). The themes pursued are mobility and sensors; analytics and insight; human interaction & performance; systems integration (architecture, development methods); and infrastructure (virtualization, cloud computing).

Kelly Dempski: Power Shift: Accenture Technology Vision

The visioning used to be far-reaching and, well, visionary; it now has a much more immediate focus, looking at things you can implement today – much more "grounded in reality".

Eight critical trends:

  • 1: Cloud computing and SaaS: Hardware cloud (amazon.com, IBM, Google (now the third largest producer of servers in the world)), desktop cloud (Google, Zimbra, MS Office Live Workspace), SaaS cloud (Netsuite, CrownPeak, salesforce.com), and services cloud (Google Checkout, Amazon web services, eBay, Yahoo)
    • examples: Flextronics has moved their HR applications over to a SaaS model. AMD emulates chips in software for testing purposes, and now contracts with Sun to do that in the cloud. The New York Times had 4TB of articles they wanted to convert to PDF: someone went on Amazon with their credit card, uploaded the 4TB and processed it (24 hours); there was a bug, so they had to do it all again (48 hours in total) – total cost $250 on someone’s credit card.
    • issues:
      • data location (where is the data)
      • privacy and security
      • performance
  • 2: Systems – regular and lite
    • SOA as the integration paradigm (regular), mashups (lite)
    • traditional back-end apps vs. end-user apps
    • small number of apps maintained by CIOs vs. large number of user- and user-group-created applications (long tail)
    • examples:
      • REST is a light architectural approach for interoperability & data extraction
      • Mashups (JackBe (trading platform tools), Serena, Duet (SAP and Microsoft), IBM) becoming more important in the enterprise arena
      • Widgets and gadgets are light-weight desktop UIs that continually update some data
  • 3: Enterprise intelligence at scale
    • combination of internet-scale computing, petabytes of data, and new algorithms
    • almost all the large systems vendors have partnered with or acquired some analytics oriented software company (such as Microsoft acquiring FAST)
    • rampant use of data: evolution through access, reporting, external & internal, unstructured etc.
  • Trends 1-2-3 together: The new CIO
    • hardware and software procured from the cloud
    • business units, end-users create their own lightweight apps
    • The new CIO:
      • "Data Fort Commander" – ensure security, privacy, integrity of corporate data and manage back-end apps
      • "Chief Intelligence Officer" – provide data analysis services & insights to business units
  • 4: Continuous access
    • the mobile device as a "first class" IT object
    • No concept of enterprise desktop/laptop
    • location-based services
  • 5: Social computing
    • amplify and support the value of the community
    • three major directions: Platformization, inter-operability, identity management
  • 6: User-generated content
    • community-based CRM (users making videos about how to run certain kinds of software or build something from IKEA)
    • new forms of entertainment
    • revenue erosion of traditional media companies
    • this has marketing implications: You can measure the sentiment out there in the user community. You switch from advertising to engaging.
  • 7: Industrialization of software development
    • converging trends will increase integration: Predictive metrics, model-driven development, domain-specific languages, service-oriented architecture, agile-development & Forever Beta.
  • 8: Green computing
    • global warming, energy prices, consumer pressure, compliance and valuation
    • switch out energy-intensive processes for information-intensive processes: Electronic collaboration; Warehousing, supply chain & logistics optimization; Smart factories, plants, buildings & homes; and new businesses such as carbon auditing and trading

Cyrille Bataller: Biometric Identity Management

Biometric identification is coming, driven by increasing demand and technological progress. Biometric identification is defined as "automated recognition of individuals based on their physiological and/or behavioral characteristics". Physiological characteristics can be face, iris or fingerprint; behavioral ones can be signature, voice, or gait. It involves a tradeoff, as with all security systems, between the level of security and the convenience of the system. Fingerprint is the most used (38%), face is the most natural, iris the most accurate. There are many others: finger/hand vein, gait, ear shape, electricity, heat signature, hand geometry and so on…

There is a balance between FMR (false match rate, i.e. falsely accepting someone as a match) and FNMR (false non-match rate); the operating point where the two are equal is called the equal error rate (EER). Iris has an EER of .002%, ten fingerprints .01%, a single fingerprint .4%, signature 3%, face recognition 6%, voice 8%. There are many other parameters in addition to this.
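As a rough illustration of how an equal error rate is found (a minimal sketch with made-up comparison scores, not any vendor’s actual method): sweep the decision threshold and look for the point where the two error rates cross.

```python
# A minimal sketch of how FMR, FNMR and the equal error rate relate, using
# invented comparison scores (higher score = more likely the same person).

def error_rates(genuine_scores, impostor_scores, threshold):
    """FNMR: genuine pairs rejected; FMR: impostor pairs accepted."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

def equal_error_rate(genuine_scores, impostor_scores, steps=1000):
    """Sweep the threshold and return the point where FMR is closest to FNMR."""
    best = min(
        (abs(fmr - fnmr), threshold, fmr)
        for threshold in (i / steps for i in range(steps + 1))
        for fmr, fnmr in [error_rates(genuine_scores, impostor_scores, threshold)]
    )
    return best[1], best[2]   # threshold and (approximate) EER

genuine = [0.91, 0.87, 0.95, 0.58, 0.88]    # same-person comparison scores
impostor = [0.30, 0.65, 0.42, 0.61, 0.25]   # different-person comparison scores
print(equal_error_rate(genuine, impostor))  # roughly (0.61, 0.2) for these toy scores
```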

Securimetrics has something called HIIDE, a mobile unit that does a number of biometrics, used in Iraq. Voice is very interesting because it can be done over the phone, which makes it useful for call centers, banks etc. Multimodal identification is important because it is hard to spoof.

Airports are a good example of what you can do with proper identification: you can move 99.9% of check-in away from the airport. Bag drop can also be almost fully automated. Portugal is the leader in the EU, having automated passport control with facial recognition (scan, use of electronic passports etc.). Most people are not very concerned with privacy, given some assurance and convenience. We are likely to see lots of automated border clearance for the masses, but also registered-traveler schemes that go through even quicker and are interoperable across many airports. One common misunderstanding is that automated identity checking is a move away from 100% accuracy; but human passport/security control is itself an error-ridden process, and automated processes are mostly more accurate.

Antoine Caner: Next Generation Branch

This is a showcase exhibit of best-practice banking technology and processes. The showroom receives about 40 company visits (mostly banks) per year.

Most banks have a multi-channel strategy; they have retreated from the strategy of getting rid of branches, but want to redefine them. Rather than doing low-value transactions, the branches are seen as a mesh network for business development.

Key principles behind the branch of the future:

  • generating and taking advantage of the traffic
  • flexibility throughout the day
  • adaptation to client’s value
  • sell & service oriented
  • modular space according to need
  • entertaining and attractive
  • focused on customer experience

Examples:

  • turning the branch windows into an interactive display (realty, for instance)
  • Bluetooth-enabled push information
  • swipe card at entrance to let branch know you are there, let your account manager know, apply Amazon-like features
  • digital displays for marketing
  • avatar-based teller services
  • biometric-based ATMs to allow for more advanced transactions, as well as more opportunistic sales applications
  • do both identification and authentication
  • digital pen user interface for capturing data from forms
  • RFID-based or NFC (Near Field Communication) in brochures, swipe and get info on screen
  • "interactive wall" for interaction with clients in information seeking mode
  • visual tracking of movement in the branch
  • modular office that can change shape during the day, reconfigurable furniture

What impressed me was not the individual applications per se – though they were impressive – but the way everything had been put together, with a back-office application that the branch manager can use to track how this whole customer interface (i.e., the whole bank branch) works.

Alexandre Naressi: Emerging Web Technologies

Alexandre leads the rich Internet applications community of interest within Accenture. He started off giving some background on Web 2.0, using Flickr as an example of a Web 2.0 application, where a company uses user-generated content and tagging to get network effects on its side. Important here is not only the user interface but also having APIs that allow anyone to create applications and to have your content or services embedded into other platforms. Dimpls is another example. More than one billion people have Internet access and 50% of the world has broadband access, which allows for richer applications. Customers’ behavior is changing – it is now a "read-write" web. It has also gotten so much cheaper to launch something: Excite cost $3m, JotSpot $200k, Digg $200.

Rich Internet Applications and social software represent the low-hanging fruit in this scenario. RIA provides the functionality of a fat client in a browser interface, with very rich and capable components for programmers to play around with.

Two families of technologies: JavaScript/Ajax (doesn’t require a plugin, advocated by Google), and three different plugin-based platforms: Silverlight (Microsoft), Flash/Flex from Adobe, and JavaFX from Sun. All of them have offline clients that can be downloaded as well. A good example is Searchme.com, which gives a better user interface – Accenture has developed something similar for their internal enterprise search.

Social software: Accenture has its own internal version of Facebook. YouTube is also a possible corporate platform where people can contribute screencasts of all kinds of interesting demos and prototypes.

Kirsti Kierulf: Nordic Innovation Model for Accenture and Microsoft

Accenture and Microsoft are collaborating (they jointly own a company, Avanade) and have set up an innovation lab in Oslo called the Accenture Innovation Lab on Microsoft Enterprise Search. There are three agendas: network services, enterprise search (iAD), and service innovation. They run a number of innovation processes internally. This happens on a Nordic level, so collaboration is with academic institutions and companies across the region.

They have made a number of tools to support innovation methodologies – InnovateIT, InnovoteIT, and InnomindIT (mind maps) – as well as a method for making quick prototypes of systems and concepts for testing and experimentation: six weeks from idea to test.

Current innovation models are not working for long-term, risky projects. Closed models do not work – hence looser, more informal and open innovation models with shorter innovation cycles. Pull people in, share costs throughout the network, and try to avoid the funnel that closes down projects with no clear business case, as well as NIH (not-invented-here) thinking. Try to park ideas rather than kill them.

Important: Ask for advice, stay in the question, maintain relationships, don’t spend time on legalities and financials.