Monday, June 11, 2007

symposium 'traditional cultural expressions in a digital environment', 8-9 June, Lucerne

This weekend, the time finally came for our eDiversity international symposium on traditional cultural expressions (TCE) in a digital environment.
After almost six months of careful planning and organisation, inviting experts, booking venues and menus, and going through every detail of the programme, both logistically and substantively, the morning of the 8th of June arrived.
It was a beautiful day in Lucerne and a good start for a very interesting, open and intensive discussion of the issues pertinent to TCE.
The debates were unique in at least a couple of respects. The first was the interdisciplinary character of the contributions and the diverse backgrounds of the speakers, who sought to reach out to other disciplines (history, philosophy, the social sciences and law) and add value to the TCE discussions. The second point of distinction had to do with situating the debates in the new digital environment and seeking to address its repercussions (both positive and negative) for the protection and promotion of TCE.
With the benefit of hindsight, and although as an organiser I am not entirely impartial, a third point worth mentioning is the interesting group of experts and their willingness to engage in open dialogue and exchange ideas (a rare thing, I would say).
Finally, a word on my own contribution on new technologies and their impact upon the protection and promotion of TCE (ppt here; paper here). My main objective was to show that digital technologies change the entire environment in which TCE are to be protected and promoted. Digital technologies and their far-reaching economic and societal implications (taking the long tail and the participative web as examples) cannot be exhausted in the TCE discussions by mere references to their negative impact upon copyright enforcement and to the instrumentalisation of ICTs for development purposes. A broader conceptual understanding is needed. Upon the latter, one could then take the concrete steps of putting together a multi-faceted, flexible toolbox, which may properly address the specificities of TCE beyond copyright (at lower transaction costs too).
I was extremely lucky to have Herbert Burkert of the University of St. Gallen as a formal discussant and Sacha Wunsch-Vincent of the OECD as a moderator. They put things into the right perspective and drew the precise contours of the topic, framing it within the symposium's objectives. I am most grateful for their contributions, in particular to Herbert, who not only supported my views but also challenged the audience and livened up the debate with his rhetorical skills.

Here are some visual impressions from the TCE discussions.

creative industries workshop

A week ago I had the pleasure of attending a most interesting Exploratory Workshop on "Rethinking Added Value in the Creative Industries", convened by Christoph Weckerle of the Research Unit Creative Industries at the University of Art and Design Zurich (hgk Zurich, 29-31 May 2007). The event gathered together people interested in the creative industries but with diverse backgrounds and personal research agendas. The goal was to agree on some common ground definitions and future research objectives, which we could eventually take up in a research network funded by the European Science Foundation.
After some general examination of the pertinent issues, it was clear that the wide variety of positions was rather difficult to reconcile, so we set more concrete tasks and worked on them in smaller groups, with astonishingly promising results.
By the end of the first day, we could all agree on a list of topics that need to be taken up in order to properly assess the workings of the creative industries and how they should be efficiently and sustainably supported.
There were six such topics, listed here in random order (or rather, the order in which they appear in my notes):
(i) institutions, regulation and public policies (including culture and cultural diversity);
(ii) labour, individuals, skills;
(iii) organisation, firms, business models;
(iv) demand (broadly defined incl. users' reactions, user created content, etc.);
(v) localisation;
(vi) innovation, learning and technology.
/Brian Moeran also insisted, convincingly and justly in my opinion, on including 'values' as a cross-cutting category/.
On the second day, we (in three groups) elaborated the most important questions to be formulated within these six categories (the results are quite extensive, so I won't put them up here).
It seems in the end that we were more successful than we were supposed to be at the level of an exploratory workshop. Perhaps that was thanks to the excellent organisation and atmosphere (working partly in the sun on the balcony of the hgk) provided by Christoph and his team.

For me as a lawyer, the workshop was a particularly fascinating experience: to be surrounded by non-lawyers only :) and to test some of the set definitions legal scholars use in their strict legal analyses (or in their often poor attempts at interdisciplinarity). The notions of 'creative industries', 'cultural industries', 'culture', 'diversity', even 'law', had broader meanings in the workshop's discussions and were, in my view, less policy-laden and rather more pragmatic.
(Commodification of culture was certainly not a dirty word).

Sunday, March 11, 2007

paper on the new ec 'television without frontiers' directive

Last week I finished a piece I have been working on for some time now. It is on the revision of EC audiovisual media regulation, in particular on the 'Audiovisual Media Services Directive', which will replace the infamous 'Television without Frontiers'. I examine the changes the new Community act will bring about and their likely impact on cultural diversity in European media.
Here is an abstract of the paper:
In the profoundly changing and dynamic world of contemporary audiovisual media, what has remained surprisingly unaffected is regulation. In the European Union, the new Audiovisual Media Services Directive (AVMS), proposed by the European Commission on 13 December 2005, should allegedly rectify this situation. Amending the existing Television without Frontiers Directive, it should offer a 'fresh approach' and meet the challenge of appropriately regulating media in a complex environment. It is meant to achieve a balance between the free circulation of TV broadcasts and new audiovisual media and the preservation of values of cultural identity and diversity, while respecting the principles of subsidiarity and proportionality inherent to the Community. The purpose of this paper is to examine whether and how the changes envisaged to the EC audiovisual media regime might influence cultural diversity in Europe. It subsequently addresses the question of whether the new AVMS properly safeguards the balance between competition and the public interest in this regard, or whether cultural diversity remains a mere political banner.
Here is the paper itself (published in an edited version in the International Journal of Cultural Property).

Friday, January 26, 2007

the 2006 nieman foundation report


Plenty of good things on convergence, blogging and grassroots journalism - in Harvard University's Nieman Foundation for Journalism 2006 report under the prophetic title of "Goodbye Gutenberg".

Thursday, January 25, 2007

impressions from Weissbad

Ville uploaded some pictures from the Weissbad eInnovation and Interoperability Workshop. /Thanks a lot, Ville/

Tuesday, January 23, 2007

a new book on community content

Herkko Hietanen of Creative Commons Finland has just published a book co-authored with Ville Oksanen and Mikko Välimäki (valuable contributors to the eInnovation workshop and fun dinner companions). The book provides an excellent overview of the law and policy of community created content - a domain which is novel and a bit uncomfortable for our IPR-based thinking. The book is downloadable under a Creative Commons licence.
/good work, Ville and Mikko/

Monday, January 22, 2007

workshop on e-innovation and ict interoperability in the beautiful appenzell

I attended a most stimulating workshop on e-innovation and ICT interoperability this weekend. The event was organised by the Research Center for Information Law of the University of St. Gallen and the Berkman Center for Internet and Society of the Harvard Law School.
It gathered together an incredibly eclectic group of 20 experts from the ICT industries, academia and key policy bodies.
The workshop was an essential part of a transatlantic research undertaking, which has taken on the ambitious task of providing a deeper and more comprehensive understanding of the drivers and inhibitors of interoperability in a digital networked environment, and of how policy makers should/could approach them.
The challenge of this task is best depicted by John Palfrey of the Berkman Center:
"As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful".

Urs Gasser of the University of St. Gallen suggested a very interesting framework to meet the above challenge, one that moves away from a substantive towards a more procedural approach.
Here are the proposed contours of such a framework:
(i) In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
(ii) What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to market forces?
(iii) If we decide to pursue a market-driven approach, are there any specific areas of concern that we - from a public policy perspective - might still want to address (e.g. disclosure rules aimed at ensuring transparency)?
(iv) If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments in supporting private sector attempts at achieving interoperability (e.g. promotion of the development of industry standards)?
(v) If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

My personal account of the interoperability event is highly positive. The discussions were intense and the viewpoints contentious. The talks with the Microsoft, Intel and IBM experts also gave me some unique insights into the workings of the industry.
The organisation was perfect (thanks a lot, Richard) and the atmosphere, despite the mixture of different people, quite interoperably great.

Wednesday, December 20, 2006

the code is the law

I am still not entirely convinced that the 'code is the law', but Code v2 by Lawrence Lessig definitely 'rules'. Code v2 is a sort of collective update (through a wiki) of Lessig's original 'Code and Other Laws of Cyberspace', which was published in 1999.
The book is downloadable here and can be construed, in terms of its creation, its copyright protection (under a Creative Commons licence) and so on, as the very epitome of web 2.0 in an academic context.

Friday, December 08, 2006

new important reports on IPRs

There is an excellent new report by Urs Gasser and Silke Ernst. It provides some recommendations on the implementation of the EU Copyright Directive (EUCD) into the national copyright frameworks of accession and candidate countries. The study focuses on digital copyright and includes specific recommendations in crucial areas, such as DRM anti-circumvention, private copying exceptions, teaching exceptions, exceptions for archives and libraries, as well as recommendations on reporting on current events and the quotation right.
Another study of importance in view of the forthcoming review of the European copyright regulatory framework is the Gowers Review of Intellectual Property. It was commissioned by the UK Chancellor of the Exchequer in December 2005 and has been published recently (Dec. 6).
The Gowers Review examines the existing IPR instruments, i.e. patents, copyright, designs and trade marks, and asks whether they are balanced, coherent and flexible. It sets out a number of targeted, practical recommendations to deliver a robust IP framework fit for the digital age. The principal recommendations of the Review are aimed at: (i) tackling IP crime and ensuring that rights are properly enforced; (ii) reducing the costs and complexity of the system; and (iii) reforming copyright law to allow individuals and institutions to use content in ways consistent with the digital age.

another paper

As part of my thesis work, I have long been considering what the appropriate objectives for communications regulation in a digital networked environment should look like, and how they interact with each other.
It seemed to me that goal evaluation, as an essential element of the process of designing regulatory frameworks, has often been ignored by lawyers and legal scholars. The paper stresses the importance of pinpointing the precise regulatory objectives in the fluid environment of electronic communications, since, owing to their technological and economic development, electronic communications have become the vital basis for the communication and distribution of information in modern societies. The paper attempts an analysis of the underlying regulatory objectives in contemporary communications and seeks to put together the complex puzzle of economic and societal issues.

Here is a slightly edited and updated version of my paper. Comments are most welcome.

Monday, October 02, 2006

service public

Here is my attempt to write a piece on the concept of universal service in a digital networked environment. It is meant to be a part of the special issue on telecommunications reform of I/S: A Journal of Law and Policy for the Information Society.
My basic idea was that in the new digital and converging electronic communications ecosystem, the rationale(s) for providing universal access have been transformed. The weight now falls not so much on internalising network externalities and/or on redistributional considerations, but should move towards creating and sustaining communication and information networks as a public good. In that sense, I argue that the debate on the concept of universal service should be readjusted to envisage access to content as an essential element of the scope of future universal service obligations. The latter is justified on freedom of expression and cultural diversity grounds as well.

new interesting reports

It has been quite some time since my last post. Getting married and honeymooning do take time... Anyway, a lot of (arguably) more important things have happened in the meantime in the big wide world, and just to keep track of a fraction of them, here are two reports, different in nature but both key, that are presently widely (and justly so) discussed.

The first is a report on the future of the internet by the PEW Internet and American Life Project. This is the second such report and in essence a survey of the predictions of internet leaders, activists and analysts on how the internet will look (and act) by 2020. A majority of the surveyed opinions agreed that:

(i) A low-cost global network will be thriving and creating new opportunities in a “flattening” world.
(ii) Humans will remain in charge of technology, even as more activity is automated and “smart agents” proliferate. However, a significant 42% of the survey respondents were pessimistic about humans’ ability to control the technology in the future. This sizeable minority agreed that dangers and dependencies will grow beyond our ability to stay in charge of technology.
(iii) Virtual reality will be compelling enough to enhance worker productivity and also spawn new addiction problems.
(iv) Tech “refuseniks” will emerge as a cultural group characterized by their choice to live off the network. Some will do this as a benign way to limit information overload, while others will commit acts of violence and terror against technology-inspired change.
(v) People will wittingly and unwittingly disclose more about themselves, gaining some benefits in the process even as they lose some privacy.
(vi) English will be a universal language of global communications, but other languages will not be displaced. Indeed, many felt that other languages, such as Mandarin, would grow in prominence.

There was strong dispute about those futuristic scenarios among the survey respondents. Those who raised challenges believe that governments and corporations will not necessarily embrace policies that will allow the network to spread to under-served populations; that serious social inequalities will persist; and that “addiction” is an inappropriate notion to attach to people’s interest in virtual environments.
The experts and analysts also split evenly on a central question of whether the world will be a better place in 2020 due to the greater transparency of people and institutions afforded by the internet: 46% agreed that the benefits of greater transparency of organizations and individuals would outweigh the privacy costs and 49% disagreed.
/The whole report is available here/


The second key report was issued some time ago and is the result of the work of the Study Group of the International Law Commission; it was finalised by Martti Koskenniemi of the University of Helsinki. The report is entitled "Fragmentation of International Law: Difficulties Arising from the Diversification and Expansion of International Law". In that sense, it (including the conclusions of the Study Group) provides an interesting framework for discussion of the topics of the NCCR "International Trade Regulation: From Fragmentation to Coherence" (of which I am happy to be a part).
Various opinions have already been expressed about the report and its recommendations. Look at this excellent blog on international economic law and policy for some of these comments (thanks a lot, Egle).

Thursday, July 27, 2006

the generative internet

There is a paper by Jonathan Zittrain on internet generativity that is causing quite a stir. And it's worth it.

Wednesday, July 26, 2006

on fragmentation

Having mentioned fragmentation and coherence and the broader framework into which my work will hopefully fit: there is a piece by Gunther Teubner and Andreas Fischer-Lescano on the concept of fragmentation (or should I say, on the often misinterpreted concept of fragmentation) that deserves proper attention. Any comments on its complexity and possible further elaborations will be much appreciated.

intro

The nature of this blog is multifarious. It will serve as an archive of personal and other people's findings, a diary and a communication platform. It is in essence an addendum to my research activities as part of the comprehensive undertaking of the Swiss National Centre of Competence in Research "International Trade Regulation - From Fragmentation to Coherence" (www.nccr-trade.org). The latter comprises 12 research initiatives that could be broadly subsumed under the overarching pursuit of coherence in international law. My humble contribution is to the subproject "eDiversity: The Legal Protection of Cultural Diversity in a Digital Networked Environment", which is led by Christoph Beat Graber of the i-call centre of the University of Lucerne (www.i-call.ch).
This blog remains, however, a personal endeavour, and all mistakes are mine.