Thursday, November 22, 2007

ict interoperability and e-innovation

Here is a brief note on the release of a study on interoperability and e-innovation, which gives an excellent analysis of all the pertinent issues and is a major contribution to this important (and in my opinion under-researched) topic. The study is a product of the collaborative work of the Berkman Center at Harvard Law School and the Research Center for Information Law at the University of St. Gallen, with two key researchers involved - John Palfrey and Urs Gasser - whom I have the great pleasure of knowing and of having worked with.
The study comprises a White Paper and three case studies on ICT interoperability and e-innovation. The case studies explore in turn digital rights management in online and offline music distribution models; digital identity systems; and web services.
The core finding is that increased levels of ICT interoperability generally foster innovation. Interoperability also contributes to other socially desirable outcomes, such as consumer choice, access to content and diversity.
As Urs points out, the investigation reached other, more nuanced conclusions:
"Interoperability does not mean the same thing in every context and as such, is not always good for everyone all the time. For example, if one wants completely secure software, then that software should probably have limited interoperability. In other words, there is no one-size-fits-all way to achieve interoperability in the ICT context.
Interoperability can be achieved by multiple means including the licensing of intellectual property, product design, collaboration with partners, development of standards and governmental intervention. The easiest way to make a product from one company work well with a product from another company, for instance, may be for the companies to cross license their technologies. But in a different situation, another approach (collaboration or open standards) may be more effective and efficient.
The best path to interoperability depends greatly upon context and which subsidiary goals matter most, such as prompting further innovation, providing consumer choice or ease of use, and the spurring of competition in the field.
The private sector generally should lead interoperability efforts. The public sector should stand by either to lend a supportive hand or to determine if its involvement is warranted".

Against the above backdrop, the White Paper proposes a set of guidelines to assist businesses and governments in determining the best way to achieve interoperability in a given situation:
(i) identify what the actual end goal or goals are. The goal is not interoperability per se, but rather something to which interoperability can lead, such as innovation or consumer choice.
(ii) consider the facts of the situation. The key variables that should be considered include time, the maturity of the relevant technologies and markets, and user practices and norms.
(iii) in light of these goals and facts of the situation, consider possible options against the benchmarks proposed by the study: effectiveness, efficiency and flexibility.
(iv) remain open to the possibility of one or more approaches to interoperability, which may also be combined with one another to accomplish interoperability that drives innovation.
(v) in some instances, it may be possible to convene all relevant stakeholders to participate in a collaborative, open standards process. In other instances, the relevant facts may suggest that a single firm can drive innovation by offering others the chance to collaborate through an open API (a hypothetical sketch follows after this list), such as Facebook's recent success in permitting third-party applications to run on its platform. But long-term sustainability may be an issue where a single firm makes an open API available according to a contract that it can change at any time.
(vi) in the vast majority of cases, the private sector can and does accomplish a high level of interoperability on its own. The state may help by playing a convening role, or even by mandating a standard on which there is widespread agreement within industry after a collaborative process. The state may also need to play a role after the fact to ensure that market actors do not abuse their positions.
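
To make the open-API point in guideline (v) concrete, here is a minimal, purely hypothetical sketch (in Python) of a third-party application consuming a platform's published API. Every name in it - the endpoint, the path, the fields - is invented for illustration and does not describe any real platform's interface:

    import json
    import urllib.request

    # The versioned base URL is, in effect, the interoperability "contract"
    # that the platform alone controls (hypothetical endpoint for illustration).
    API_BASE = "https://api.example.com/v1"

    def fetch_user_profile(user_id: str, token: str) -> dict:
        """Read a user profile through the platform's published interface."""
        req = urllib.request.Request(
            f"{API_BASE}/users/{user_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

The third party interoperates only on the platform's terms: if the platform retires /v1 or changes the response schema, every client built against it breaks. That is precisely the long-term sustainability concern raised in guideline (v).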

I consider the above guidelines succinctly formulated and especially valuable for further research efforts in the field of interoperability and innovation. I am honoured to have been part of the discussions on these issues during the first interoperability workshop convened by Urs in Weissbad, Switzerland.

Friday, October 12, 2007

ec electronic communications and competition law

My PhD thesis is finally out as a book with Cameron May.

It's entitled "EC Electronic Communications and Competition Law" and in essence attempts to answer the question of whether generic competition law rules can be a sufficient and efficient regulator of the electronic communications sector.

I argue that this question needs to be examined not in the conventional contexts of sector-specific rules versus competition rules, or deregulation versus regulation, but in a broader governance context. The reader is provided with an insight into the workings of the communications sector, which is exposed as being network-bound, converging, dynamic and endowed with a special societal function. This, together with the scrutiny of the underlying regulatory objectives, paints the most comprehensive picture of the regulatory environment in the communications sector and allows for a nuanced answer to the above question and, ultimately, for the design of a multi-faceted, hybrid regulatory toolbox.

The enquiry is based on the European Community competition rules and the current communications regime, but the conclusions drawn are applicable to other regulatory environments as well. Some insights are also of interest today in the context of the net neutrality discussions, or more generally with regard to the question of how the regulation of infrastructure influences content flows.

digital natives project

Here is just a brief note to draw attention to the very intriguing project of Urs Gasser and John Palfrey called Digital Natives. It is a collaborative undertaking of the Berkman Center for Internet and Society at Harvard Law School and the Research Center for Information Law at the University of St. Gallen.
The project's objective is to gain a better understanding of the experience of young people "born digital" with new digital media, such as the Internet, cell phones and related technologies. By gaining insight into how digital natives make sense of their interactions in this digital landscape, Urs and John want to address the issues these practices raise, learn how to harness the opportunities their digital fluency presents, and shape our regulatory and educational frameworks in a way that advances the public interest.
Key questions put forward are: (i) How do we take best advantage of the benefits of online identities while managing issues of privacy and safety? (ii) How can we envision intellectual property law that allows the exciting "rip, mix, burn" (and mash!) creativity and culture to thrive? (iii) How can we learn (and teach others) to best navigate the information overload we face in today's digital environment?
For more details, see Urs' blog where he elaborates further upon the fundamental ideas of the Digital Natives project on the occasion of the OECD-Canada Forum on the Participative Web that took place in Ottawa on 3 October 2007.
The broader context of the latter event is of specific interest to my own research on the changing digital media environment, including changing models of consumer and business behaviour, and their impact on governance models in general, and on the diversity of cultural expressions in particular.

Update 10 December 07: Here are some new thoughts from Urs and John on the ongoing book project, shared with students at Harvard and St. Gallen.
And some comments by Henry Jenkins on what 'digital immigrants' are (or may be).

Monday, June 11, 2007

symposium 'traditional cultural expressions in a digital environment', 8-9 June, Lucerne

This weekend, the time for our eDiversity international symposium on traditional cultural expressions (TCE) in a digital environment finally came.
After almost six months of careful planning and organisation - inviting experts, booking venues and menus, going through every detail of the programme, both logistically and substantively - the morning of the 8th of June arrived.
It was a beautiful day in Lucerne and a good start for a very interesting, open and intensive discussion on the TCE pertinent issues.
The debates were unique in at least a couple of respects. The first was the interdisciplinary character of the contributions and the diverse backgrounds of the speakers, who sought to reach out to other disciplines (history, philosophy, the social sciences and law) and add value to the TCE discussions. The second point of distinction had to do with situating the debates in the new digital environment and seeking to address its repercussions (both positive and negative) for the protection and promotion of TCE.
With the benefit of hindsight (and conceding that as an organiser I am not entirely impartial), a third point worth mentioning is the interesting group of experts and their willingness to engage in open dialogue and exchange of ideas (a rare thing, I would say).
Finally, a word on my own contribution on new technologies and their impact upon the protection and promotion of TCE (ppt here; paper here). My main objective was to show that digital technologies change the entire environment in which TCE are to be protected and promoted. Digital technologies and their far-reaching economic and societal implications (using the long tail and the participative web as examples) cannot be reduced in the TCE discussions to mere references to their negative impact upon copyright enforcement and to the instrumentalisation of ICTs for development purposes. A broader conceptual understanding is needed. Upon such an understanding, one could then take the concrete steps of putting together a multi-faceted, flexible toolbox that may properly address the specificities of TCE beyond copyright (at lower transaction costs, too).
I was extremely lucky to have Herbert Burkert of the University of St. Gallen as a formal discussant and Sacha Wunsch-Vincent of the OECD as a moderator. They put things into the right perspective and drew the precise contours of the topic, framing it within the symposium's objectives. I am most grateful for their contributions, in particular to Herbert, who not only supported my views but also challenged the audience and livened up the debate with his rhetorical skills.

Here are some visual impressions from the TCE discussions.

creative industries workshop

A week ago I had the pleasure of attending a most interesting Exploratory Workshop on "Rethinking Added Value in the Creative Industries", convened by Christoph Weckerle of the Research Unit Creative Industries at the University of Art and Design Zurich (hgk Zurich, 29-31 May 2007). The event gathered together people interested in the creative industries but having diverse backgrounds and personal research agendas. The goal was to agree on some common-ground definitions and future research objectives, which we could eventually take up in a research network funded by the European Science Foundation.
After some general examination of the pertinent issues, it was clear that the wide variety of positions was rather difficult to reconcile, so we set more concrete tasks and worked on them in smaller groups - with astonishingly promising results.
By the end of the first day, we could all agree on a list of topics that need to be taken up in order to properly assess the workings of the creative industries and how they should be efficiently and sustainably supported.
There were six such topics, listed in a random order (or the way I have them in my random notes):
(i) institutions, regulation and public policies (including culture and cultural diversity);
(ii) labour, individuals, skills;
(iii) organisation, firms, business models;
(iv) demand (broadly defined incl. users' reactions, user created content, etc.);
(v) localisation;
(vi) innovation, learning and technology.
/Brian Moeran also insisted, convincingly and justly in my opinion, on including 'values' as a cross-cutting category/.
On the second day, we (in three groups) elaborated the most important questions to be formulated within these six categories (a quite extensive list, so I won't put it up here).
It seems in the end that we were more successful than we were supposed to be at the level of an exploratory workshop. Perhaps that was thanks to the excellent organisation and atmosphere (working partly in the sun on the balcony of the hgk) provided by Christoph and his team.

For me as a lawyer, the workshop was a particularly fascinating experience: to be surrounded only by non-lawyers :) and to test some of the set definitions legal scholars use in their strict legal analyses (or in their often poor attempts at interdisciplinarity). The notions of 'creative industries', 'cultural industries', 'culture', 'diversity', even 'law', had broader meanings in the workshop's discussions - less policy-laden and rather more pragmatic, in my view.
(Commodification of culture was certainly not a dirty word).

Sunday, March 11, 2007

paper on the new ec 'television without frontiers' directive

Last week I finished a piece I have been working on for some time now. It is on the revision of the EC audiovisual media regulation, in particular on the 'Audiovisual Media Services Directive', which will replace the infamous 'Television without Frontiers'. I examine the changes the new Community act will bring about and their likely impact on cultural diversity in European media.
Here is an abstract of the paper:
In the profoundly changing and dynamic world of contemporary audiovisual media, what has remained surprisingly unaffected is regulation. In the European Union, the new Audiovisual Media Services Directive (AVMS), proposed by the European Commission on 13 December 2005, should allegedly rectify this situation. Amending the existing Television without Frontiers Directive, it should offer a 'fresh approach' and meet the challenge of appropriately regulating media in a complex environment. It is meant to achieve a balance between the free circulation of TV broadcasts and new audiovisual media and the preservation of values of cultural identity and diversity, while respecting the principles of subsidiarity and proportionality inherent to the Community. The purpose of this paper is to examine whether and how the changes envisaged to the EC audiovisual media regime might influence cultural diversity in Europe. It subsequently addresses the question of whether the new AVMS properly safeguards the balance between competition and the public interest in this regard, or whether cultural diversity remains a mere political banner.
Here is the paper itself (published in an edited version in the International Journal of Cultural Property).

Friday, January 26, 2007

the 2006 nieman foundation report


Plenty of good things on convergence, blogging and grassroots journalism - in Harvard University's Nieman Foundation for Journalism 2006 Report, under the prophetic title of "Goodbye Gutenberg".

Thursday, January 25, 2007

impressions from Weissbad

Ville uploaded some pictures from the Weissbad eInnovation and Interoperability Workshop. /Thanks a lot, Ville/

Tuesday, January 23, 2007

a new book on community content

Herkko Hietanen of Creative Commons Finland has just published a book co-authored with Ville Oksanen and Mikko Välimäki (valuable contributors to the eInnovation workshop and fun dinner companions). The book provides an excellent overview of the law and policy of community-created content - a domain which is novel and a bit uncomfortable for our IPR-based thinking. The book is downloadable under a CC license.
/good work, Ville and Mikko/

Monday, January 22, 2007

workshop on e-innovation and ict interoperability in the beautiful appenzell

I attended a most stimulating workshop on e-innovation and ICT interoperability this weekend. The event was organised by the Research Center for Information Law of the University of St. Gallen and the Berkman Center for Internet and Society at Harvard Law School.
It gathered together an incredibly eclectic group of 20 experts from the ICT industries, academia and key policy bodies.
The workshop was an essential part of a transatlantic research undertaking, which has taken on the ambitious task of providing a deeper and more comprehensive understanding of the drivers and inhibitors of interoperability in a digital networked environment, and of how policy makers should or could approach them.
The challenge of this task is best depicted by John Palfrey of the Berkman Center:
"As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful".

Urs Gasser of the University of St. Gallen suggested a very interesting framework for meeting the above challenge, one which moves away from a substantive towards a more procedural approach.
Here are the proposed contours of such a framework:
In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to the market forces?
If we decide to pursue a market-driven approach to achieve it, are there any specific areas of concern or problems that we - from a public policy perspective - still might want to address (e.g. disclosure rules aimed at ensuring transparency)?
If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

My personal account of the interoperability event is highly positive. The discussions were intense and the viewpoints contentious. The talks with the Microsoft, Intel and IBM experts also gave me some unique insights into the workings of the industry.
The organisation was perfect (thanks a lot, Richard) and the atmosphere, despite the mixture of different people, quite interoperably great.