Thursday, March 05, 2009

not taking care

In the last few months (a year, really!) I have not been taking sufficient care of my blog, and it is in a poor state. Funnily enough, this has happened not because I have nothing to blog about but rather because I have too much to do and too little time to blog about it. That is not a very good excuse, I know...
As some sort of justification: I have been posting pieces to the IJCLP (International Journal of Communications Law and Policy) blog, and research updates are available on my SSRN page.

Monday, September 01, 2008

'born digital' by urs gasser and john palfrey has been released


I would have reported on the release of 'Born Digital' anyway, because it is a timely and most important contribution to understanding how young people who were born digital tick, and what impact new media have on the forming of this generation (and vice versa).
I find it even more urgent to report on the book's release, however, because I am proud to know both of its authors, Urs Gasser and John Palfrey, and happen to know how brilliantly they think and how well they can express their insights.

As a digital migrant, I look forward to holding the book in my hands (the Kindle is not available in Europe yet, so I must content myself with a good old hard copy).

Here is more about the book and the whole digital natives project.
Do buy the book because we indeed need to know what is happening with the digital natives and not just make unqualified guesses as digital migrants.


PS: Congrats, Urs and John! Well done, as always.

Monday, June 02, 2008

call for papers a2k

As a member of the editorial board of the International Journal of Communications Law and Policy (IJCLP), I am delighted to let you know that the Information Society Project of Yale Law School and the IJCLP have just announced their fifth interdisciplinary writing competition and call for papers in conjunction with the A2K3 conference taking place on September 8-10, 2008 in Geneva, Switzerland.
Students, scholars, policy-makers, technologists, activists, and industry representatives are invited to submit papers on access to knowledge (A2K) and communications law and policy for publication by the IJCLP. Submissions must be received by June 30th 2008.

Panel topics for this year’s conference include:

• The Development Agenda at WIPO
• A2K and Human Rights
• A2K and Global Trade
• Research and Capacity-Building for A2K
• Prize Mechanisms for Innovation
• Copyright Exceptions and Limitations
• Media and Communication Rights
• Open Business Models
• Technologies for Access

All submissions should be written in English and submitted in .doc, .rtf, .odt, or .pdf format. Submissions should conform to academic citation standards, be no longer than 15,000 words, and must include an abstract no longer than 250 words. Submissions should be e-mailed simultaneously to Simone Francesco Bonetti, chief editor of the IJCLP (simo.bonetti@tiscali.it); Lea Shaver, director of the Access to Knowledge Program of the Yale ISP (lea.shaver@yale.edu); and Shay David, Co-Founder and CTO of Kaltura (shay.david@kaltura.com).

We very much look forward to your intriguing submissions.

Thursday, November 22, 2007

ict interoperability and einnovation

Here is a brief note on the release of a study on ICT interoperability and e-innovation, which offers an excellent analysis of all the pertinent issues and is a major contribution to this important (and, in my opinion, under-researched) topic. The study is a product of the collaborative work of the Berkman Center of Harvard Law School and the Research Center for Information Law at the University of St. Gallen, with two key researchers involved - John Palfrey and Urs Gasser - whom I have the great pleasure to know and to have worked with.
The study comprises a White Paper and three case studies on ICT interoperability and e-innovation. The case studies explore, in turn, digital rights management in online and offline music distribution models; digital identity systems; and web services.
The core finding is that increased levels of ICT interoperability generally foster innovation. Interoperability also contributes to other socially desirable outcomes, such as consumer choice, access to content and diversity.
As Urs points out, the investigation reached other, more nuanced conclusions:
"Interoperability does not mean the same thing in every context and as such, is not always good for everyone all the time. For example, if one wants completely secure software, then that software should probably have limited interoperability. In other words, there is no one-size-fits-all way to achieve interoperability in the ICT context.
Interoperability can be achieved by multiple means including the licensing of intellectual property, product design, collaboration with partners, development of standards and governmental intervention. The easiest way to make a product from one company work well with a product from another company, for instance, may be for the companies to cross license their technologies. But in a different situation, another approach (collaboration or open standards) may be more effective and efficient.
The best path to interoperability depends greatly upon context and which subsidiary goals matter most, such as prompting further innovation, providing consumer choice or ease of use, and the spurring of competition in the field.
The private sector generally should lead interoperability efforts. The public sector should stand by either to lend a supportive hand or to determine if its involvement is warranted".

Against the above backdrop, the White Paper proposes a set of guidelines to assist businesses and governments in determining the best way to achieve interoperability in a given situation:
(i) identify what the actual end goal or goals are. The goal is not interoperability per se, but rather something to which interoperability can lead, such as innovation or consumer choice.
(ii) consider the facts of the situation. The key variables that should be considered include time, maturity of the relevant technologies and markets and user practices and norms.
(iii) in light of these goals and facts of the situation, consider possible options against the benchmarks proposed by the study: effectiveness, efficiency and flexibility.
(iv) remain open to the possibility of one or more approaches to interoperability, which may also be combined with one another to accomplish interoperability that drives innovation.
(v) in some instances, it may be possible to convene all relevant stakeholders to participate in a collaborative, open standards process. In other instances, the relevant facts may suggest that a single firm can drive innovation by offering to others the chance to collaborate through an open API, such as Facebook’s recent success in permitting third-party applications to run on its platform. But long-term sustainability may be an issue where a single firm makes an open API available according to a contract that it can change at any time.
(vi) In the vast majority of cases, the private sector can and does accomplish a high level of interoperability on its own. The state may help by playing a convening role, or even in mandating a standard on which there is widespread agreement within industry after a collaborative process. The state may need to play a role after the fact to ensure that market actors do not abuse their positions.

I consider the above guidelines succinctly formulated and especially valuable for further research efforts in the field of interoperability and innovation. I am honoured to have been part of the discussions on these issues during the first interoperability workshop convened by Urs in Weissbad, Switzerland.

Friday, October 12, 2007

ec electronic communications and competition law




My phd thesis is finally out as a book with Cameron May.

It's entitled "EC Electronic Communications and Competition Law" and in essence attempts to answer the question of whether generic competition law rules can be a sufficient and efficient regulator of the electronic communications sector.

I argue that this question needs to be examined not in the conventional contexts of sector-specific rules versus competition rules, or deregulation versus regulation, but in a broader governance context. The reader is provided with an insight into the workings of the communications sector, which is shown to be network-bound, converging, dynamic and endowed with a special societal function. This, together with a scrutiny of the underlying regulatory objectives, paints a comprehensive picture of the situation in the communications sector and allows for a nuanced answer to the above question and, ultimately, for the design of a multi-faceted, hybrid regulatory toolbox.

The enquiry is based on the European Community competition rules and the current communications regime but the conclusions drawn are applicable to other regulatory environments as well. Some insights are also of interest today in the context of the net neutrality discussions, or more generally with regard to the question of how the regulation of infrastructure influences content flows.

digital natives project

Here is just a brief note to draw attention to the very intriguing project of Urs Gasser and John Palfrey called Digital Natives. It is a collaborative undertaking of the Berkman Center for Internet and Society at Harvard Law School and the Research Center for Information Law at the University of St. Gallen.
The project's objective is to gain a better understanding of the experience that young people "born digital" have with new digital media, such as the Internet, cell phones and related technologies. By gaining insight into how digital natives make sense of their interactions in this digital landscape, Urs and John want to address the issues these practices raise, learn how to harness the opportunities their digital fluency presents, and shape our regulatory and educational frameworks in a way that advances the public interest.
Key questions put forward are: (i) How do we take best advantage of the benefits of online identities while managing issues of privacy and safety? (ii) How can we envision intellectual property law that allows the exciting "rip, mix, burn" (and mash!) creativity and culture to thrive? (iii) How can we learn (and teach others) to best navigate the information overload we face in today's digital environment?
For more details, see Urs' blog where he elaborates further upon the fundamental ideas of the Digital Natives project on the occasion of the OECD-Canada Forum on the Participative Web that took place in Ottawa on 3 October 2007.
The broader context of the latter event is of specific interest to my own research of the changing digital media environment, including changing models of consumer and business behaviour, and their impact on governance models in general, and on the diversity of cultural expressions in particular.

Update 10 December 07: Here are some new thoughts from Urs and John on the ongoing book project, shared with students at Harvard and St. Gallen.
And some comments by Henry Jenkins on what 'digital immigrants' are (may be).

Monday, June 11, 2007

symposium 'traditional cultural expressions in a digital environment', 8-9 June, Lucerne

This weekend, the time for our eDiversity international symposium on traditional cultural expressions (TCE) in a digital environment finally came.
After almost six months of careful planning and organisation, inviting experts, booking venues and menus, and going through every detail of the programme, both logistically and substantively, the morning of the 8th of June arrived.
It was a beautiful day in Lucerne and a good start for a very interesting, open and intensive discussion on the TCE pertinent issues.
The debates were unique at least in a couple of points. The first one was the interdisciplinary character of the contributions and the diverse backgrounds of the speakers that sought to reach out to other disciplines (history, philosophy, social sciences and law) and add value to the TCE discussions. The second point of distinction had to do with situating the debates in the new digital environment and seeking to address its repercussions (both positive and negative) for the protection and promotion of TCE.
With the benefit of hindsight, although as an organiser not entirely impartial in my view, a third point worth mentioning is the interesting group of experts and their willingness for open dialogues and exchange of ideas (a rare thing, I would say).
Finally, a word on my own contribution on new technologies and their impact upon the protection and promotion of TCE (ppt here; paper here). My main objective was to show that digital technologies change the entire environment in which TCE are to be protected and promoted. Digital technologies and their far-reaching economic and societal implications (using the long tail and the participative web as examples) cannot be reduced in the TCE discussions to mere references to their negative impact upon copyright enforcement and to the instrumentalisation of ICTs for development purposes. A broader conceptual understanding is needed. Upon the latter, one could then take the concrete steps of putting together a multi-faceted, flexible toolbox, which may properly address the specificities of TCE beyond copyright (at lower transaction costs too).
I was extremely lucky to have Herbert Burkert of the University of St. Gallen as a formal discussant and Sacha Wunsch-Vincent of the OECD as a moderator. They put things into the right perspective and drew the precise contours of the topic, framing it within the symposium's objectives. I am most grateful for their contributions, in particular to Herbert, who not only supported my views but also challenged the audience and livened up the debate with his rhetorical skills.

Here are some visual impressions from the TCE discussions.

creative industries workshop

A week ago I had the pleasure of attending a most interesting Exploratory Workshop on "Rethinking Added Value in the Creative Industries", convened by Christoph Weckerle of the Research Unit Creative Industries at the University of Art and Design Zurich (hgk Zurich, 29-31 May 2007). The event gathered together people interested in the creative industries but with diverse backgrounds and personal research agendas. The goal was to agree on some common ground definitions and future research objectives, which we could eventually take up in a research network funded by the European Science Foundation.
After some general examination of the pertinent issues, it was clear that the wide variety of positions was rather difficult to reconcile, so we set more concrete tasks and worked on them in smaller groups. With astonishingly promising results.
By the end of the first day, we could all agree on a list of topics that need to be taken up in order to properly assess the workings of the creative industries and how they should be efficiently and sustainably supported.
There were six such topics, listed here in random order (or rather, the way I have them in my random notes):
(i) institutions, regulation and public policies (including culture and cultural diversity);
(ii) labour, individuals, skills;
(iii) organisation, firms, business models;
(iv) demand (broadly defined incl. users' reactions, user created content, etc.);
(v) localisation;
(vi) innovation, learning and technology.
/Brian Moeran, convincingly and justly in my opinion, also insisted on including 'values' as a cross-cutting category/.
On the second day, we (in three groups) elaborated the most important questions to be formulated within these six categories (quite extensive, so I won't put them up here).
It seems in the end that we were more successful than we were supposed to be at the level of an exploratory workshop. Perhaps that was thanks to the excellent organisation and atmosphere (working partly in the sun at the balcony of the hgk) provided by Christoph and his team.

For me as a lawyer, the workshop was a particularly fascinating experience: to be surrounded by non-lawyers only :) and to test some of the set definitions legal scholars use in their strict legal analyses (or their often poor attempts at interdisciplinarity). The notions of 'creative industries', 'cultural industries', 'culture', 'diversity', even 'law', had broader meanings in the workshop's discussions, and were less policy-laden but rather pragmatic in my view.
(Commodification of culture was certainly not a dirty word).

Sunday, March 11, 2007

paper on the new ec 'television without frontiers' directive

Last week I finished a piece I have been working on for some time now. It is on the revision of EC audiovisual media regulation, in particular on the 'Audiovisual Media Services Directive', which will replace the infamous 'Television without Frontiers'. I examine the changes the new Community act will bring about and their likely impact on cultural diversity in European media.
Here is an abstract of the paper:
In the profoundly changing and dynamic world of contemporary audiovisual media, what has remained surprisingly unaffected is regulation. In the European Union, the new Audiovisual Media Services Directive (AVMS), proposed by the European Commission on 13 December 2005, should allegedly rectify this situation. Amending the existing Television without Frontiers Directive, it should offer a 'fresh approach' and meet the challenge of appropriately regulating media in a complex environment. It is meant to achieve a balance between the free circulation of TV broadcast and new audiovisual media and the preservation of values of cultural identity and diversity, while respecting the principles of subsidiarity and proportionality inherent to the Community. The purpose of this paper is to examine whether and how the changes envisaged to the EC audiovisual media regime might influence cultural diversity in Europe. It addresses subsequently the question of whether the new AVMS properly safeguards the balance between competition and the public interest in this regard, or whether cultural diversity remains a mere political banner.
Here is the paper itself (published in an edited version in the International Journal of Cultural Property).

Friday, January 26, 2007

the 2006 nieman foundation report


Plenty of good things on convergence, blogging and grassroots journalism - in Harvard University's Nieman Foundation for Journalism 2006 Report under the prophetic title of "Goodbye Gutenberg".

Thursday, January 25, 2007

impressions from Weissbad

Ville uploaded some pictures from the Weissbad eInnovation and Interoperability Workshop. /Thanks a lot, Ville/

Tuesday, January 23, 2007

a new book on community content

Herkko Hietanen of Creative Commons Finland has just published a book co-authored with Ville Oksanen and Mikko Välimäki (valuable contributors to the eInnovation workshop and fun dinner companions). The book provides an excellent overview of the law and policy of community created content - a domain which is novel and a bit uncomfortable for our IPR-based thinking. The book is downloadable under a CC license.
/good work, Ville and Mikko/

Monday, January 22, 2007

workshop on e-innovation and ict interoperability in the beautiful appenzell

I attended a most stimulating workshop on e-innovation and ICT interoperability this weekend. The event was organised by the Research Center for Information Law of the University of St. Gallen and the Berkman Center for Internet and Society of the Harvard Law School.
It gathered together an incredibly eclectic group of 20 experts from the ICT industries, academia and key policy bodies.
The workshop was an essential part of a transatlantic research undertaking, which has taken up the ambitious task of providing a deeper and more comprehensive understanding of the drivers and inhibitors of interoperability in a digital networked environment, and of how policy makers should/could approach them.
The challenge of this task is best depicted by John Palfrey of the Berkman Center:
"As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful".

Urs Gasser of the University of St.Gallen suggested a very interesting framework to meet the above challenge, which moves away from a substantive towards a more procedural approach.
Here are the proposed contours of such a framework:
In what area and context do we want to achieve interoperability? At what level and to what degree? To what purpose (policy goals such as innovation) and at what costs?
What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to the market forces?
If we decide to pursue a market-driven approach to achieve it, are there any specific areas of concerns and problems, respectively, that we - from a public policy perspective – still might want to address (e.g. disclosure rules aimed at ensuring transparency)?
If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

My personal account of the interoperability event is highly positive. The discussions were intense and the viewpoints contentious. The talks with the Microsoft, Intel and IBM experts also gave me some unique insights into the workings of the industry.
The organisation was perfect (thanks a lot, Richard) and the atmosphere, despite the mixture of different people, quite interoperably great.

Wednesday, December 20, 2006

the code is the law

I am still not entirely convinced that the 'code is the law' but code v2 by Lawrence Lessig definitely 'rules'. Code v2 is a sort of a collective update (through a wiki) of Lessig's original 'Code and Other Laws of Cyberspace', which was published in 1999.
The book is downloadable here and it can be construed (in terms of creation, copyright protection (under the creative commons), etc.) as the very epitome of web 2.0 in an academic context.

Friday, December 08, 2006

new important reports on IPRs

There is an excellent new report by Urs Gasser and Silke Ernst. It provides some recommendations on the implementation of the EU Copyright Directive (EUCD) into the national copyright frameworks of accession and candidate countries. The study focuses on digital copyright and includes specific recommendations in crucial areas, such as DRM anti-circumvention, private copying exceptions, teaching exceptions, exceptions for archives and libraries, as well as recommendations on reporting on current events and the quotation right.
Another study of importance in view of the forthcoming review of the European copyright regulatory framework is the Gowers Review of Intellectual Property. It was commissioned by the UK Chancellor of the Exchequer in December 2005 and has been published recently (Dec. 6).
The Gowers Review examines the existing IPR instruments, i.e. patents, copyright, designs and trade marks, and whether they are balanced, coherent and flexible. It sets out a number of targeted, practical recommendations to deliver a robust IP framework fit for the digital age. The principal recommendations of the Review are aimed at: (i) tackling IP crime and ensuring that rights are properly enforced; (ii) reducing the costs and complexity of the system; and (iii) reforming copyright law to allow individuals and institutions to use content in ways consistent with the digital age.

another paper

As part of my thesis work, I have long been considering what the appropriate objectives for communications regulation in a digital networked environment should look like, and how they interplay with each other.
It seemed to me that goal evaluation, though an essential element of the process of designing regulatory frameworks, has often been ignored by lawyers and legal scholars. The paper stresses the importance of pinpointing the precise regulatory objectives in the fluid environment of electronic communications, since, owing to their technological and economic development, electronic communications have become the vital basis for the communication and distribution of information in modern societies. The paper attempts an analysis of the underlying regulatory objectives in contemporary communications and seeks to put together the complex puzzle of economic and societal issues.

Here is a slightly edited and updated version of my paper. Comments are most welcome.

Monday, October 02, 2006

service public

Here is my attempt to write a piece on the concept of universal service in a digital networked environment. It is meant to be a part of the special issue on telecommunications reform of I/S: A Journal of Law and Policy for the Information Society.
My basic idea was that in the new digital and converging electronic communications ecosystem, the rationale(s) for providing universal access have been transformed. The weight is now not so much on internalising the network externalities and/or redistributional considerations, but should move towards creating and sustaining communication and information networks as a public good. In that sense, I argue that the debate on the concept of universal service should be readjusted and envisage access to content as an essential element of the scope of future universal service obligations. The latter is justified on freedom of expression and cultural diversity grounds as well.

new interesting reports

It has been quite some time since my last post. Getting married and honeymoon-ing do take time... Anyway, a lot of (arguably) more important things have happened in the meantime in the big wide world and just to keep track of a fraction of them, here are two different in nature, but both key reports, that are presently widely (and justly so) discussed.

The first one is a report on the future of the internet by the PEW Internet and American Life Project. This is the second such report: in essence, a survey of the predictions of internet leaders, activists and analysts on how the internet will look (and act) by 2020. A majority of the surveyed opinions agree that:

(i) A low-cost global network will be thriving and creating new opportunities in a “flattening” world.
(ii) Humans will remain in charge of technology, even as more activity is automated and “smart agents” proliferate. However, a significant 42% of the survey respondents were pessimistic about humans’ ability to control the technology in the future. This significant minority agreed that dangers and dependencies will grow beyond our ability to stay in charge of technology.
(iii) Virtual reality will be compelling enough to enhance worker productivity and also spawn new addiction problems.
(iv) Tech “refuseniks” will emerge as a cultural group characterized by their choice to live off the network. Some will do this as a benign way to limit information overload, while others will commit acts of violence and terror against technology-inspired change.
(v) People will wittingly and unwittingly disclose more about themselves, gaining some benefits in the process even as they lose some privacy.
(vi) English will be a universal language of global communications, but other languages will not be displaced. Indeed, many felt other languages, such as Mandarin, would grow in prominence.

There was strong dispute about those futuristic scenarios among the survey respondents. Those who raised challenges believe that governments and corporations will not necessarily embrace policies that will allow the network to spread to under-served populations; that serious social inequalities will persist; and that “addiction” is an inappropriate notion to attach to people’s interest in virtual environments.
The experts and analysts also split evenly on a central question of whether the world will be a better place in 2020 due to the greater transparency of people and institutions afforded by the internet: 46% agreed that the benefits of greater transparency of organizations and individuals would outweigh the privacy costs and 49% disagreed.
/The whole report is available here/


The second key report was issued some time ago; it is the result of the work of the Study Group of the International Law Commission and was finalised by Martti Koskenniemi of the University of Helsinki. The report is entitled "Fragmentation of International Law: Difficulties Arising from the Diversification and Expansion of International Law". In that sense, it (incl. the conclusions of the Study Group) provides an interesting framework for discussion of the topics of the NCCR "International Trade Regulation: From Fragmentation to Coherence" (of which I am happy to be a part).
Various opinions have already been expressed about the report and its recommendations. Look at this excellent blog on international economic law and policy for some of these comments (thanks a lot, Egle).

Thursday, July 27, 2006

the generative internet

There is a paper by Jonathan Zittrain on internet generativity that is causing quite a stir. And it's worth it.

Wednesday, July 26, 2006

on fragmentation

Having mentioned fragmentation and coherence, and the broader framework where, hopefully, my work will fit, there is a piece by Gunther Teubner and Andreas Fischer-Lescano on the concept of fragmentation (or should I say on the often misinterpreted concept of fragmentation) that deserves proper attention. Any comments on its complexity and possible further elaborations will be much appreciated.