I attended a most stimulating workshop on e-innovation and ICT interoperability this weekend. The event was organised by the Research Center for Information Law of the University of St. Gallen and the Berkman Center for Internet and Society of the Harvard Law School.
It brought together an incredibly eclectic group of 20 experts from the ICT industries, academia and key policy bodies.
The workshop was an essential part of a transatlantic research undertaking, which has taken on the ambitious task of providing a deeper and more comprehensive understanding of the drivers and inhibitors of interoperability in a digital networked environment, and of how policy makers could or should approach them.
The challenge of this task is best depicted by John Palfrey of the Berkman Center:
"As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful".
Urs Gasser of the University of St. Gallen suggested a very interesting framework for meeting the above challenge, one that moves away from a substantive towards a more procedural approach.
Here are the proposed contours of such a framework:
In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to market forces?
If we decide to pursue a market-driven approach to achieve it, are there any specific areas of concern that we, from a public policy perspective, might still want to address (e.g. disclosure rules aimed at ensuring transparency)?
If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?
My personal account of the interoperability event is highly positive. The discussions were intense and the viewpoints contentious. The talks with the Microsoft, Intel and IBM experts also gave me some unique insights into the workings of the industry.
The organisation was perfect (thanks a lot, Richard) and the atmosphere, despite the mixture of different people, quite interoperably great.