Ed Felten's blog

DRM as Folding Chair

Frank Field offers an interesting analogy:

DRM is a folding chair – specifically, it’s one of those folding chairs that people use, after shoveling out the snow from a parking space, to claim it after they drive away.

For those of you who don’t have to cope with snow, I know that sounds incredible (it was to me when I moved here from South Carolina), but this is a real problem in cities with limited parking and poor snow removal. People who shovel out their cars will have a ratty old folding chair or an old street cone or, if they’re feeling really aggressive, an old kid’s toy that they will plant squarely in the middle of the shoveled-out parking space. This object "marks" the spot, and everyone knows what it means – this is my spot: park here and you will suffer the consequences.

This struck me, in part, because it echoes an example I like to use. When teaching about the theory of property, I start with a class discussion about whether there should be a property right in shoveled-out parking spaces. It's a helpful example because everybody understands it, few people have a predisposition one way or the other, and it exposes most of the tradeoffs involved in creating a new form of property.

As Frank describes it, "ownership" of a Cambridge parking space is effected not by any legal right but by the threat that noncompliant cars will be vandalized. This is a key distinction. Typically, some of my students end up endorsing a limited property right in shoveled-out parking spaces, but my guess is that they would feel differently about a system created by private decree and "enforced" by vandalism.

This is where the analogy to DRM gets complicated. DRM systems don't trash the computers of noncompliant users, so they don't rely on the same kind of intimidation that Frank's folding-chair owners use.

But Frank's analogy does work very nicely in one dimension. DRM developers, like Cambridge folding-chair owners, are trying to establish a social norm that people should keep out of the territory they claim. Such claims should be evaluated on their merits, and not just taken for granted.


Japanese P2P Author Arrested

Japanese police have arrested the author of Winny, a peer-to-peer application popular in Japan, according to a story on ABC News's Australian site. (Reportedly, a more detailed article is available in Japanese.) Isamu Kaneko, a 33-year-old Computer Engineering "guest research associate" at Tokyo University, was arrested for conspiracy to infringe copyright. If convicted, he faces a maximum penalty of three years in prison. Winny's author had previously been known only by the online moniker "47," but police apparently used the records of some kind of online bulletin board to identify him.

UPDATE (10:20 AM): Corrected Mr. Kaneko's job title. I originally wrote "graduate student" based on the ABC article, but Seth Finkelstein pointed me to an authoritative page at Tokyo University with the accurate title.

Is the U.S. Losing its Technical Edge?

The U.S. is losing its dominance in science and technology, according to William J. Broad's article in the New York Times earlier this week. The article looked at the percentage of awards (such as Nobel Prizes in science), published papers, and issued U.S. patents that go to Americans, and found that the U.S. share had declined significantly.

Although the trend is real, the article does oversell it. For example, the graph that appears at the top shows the number of papers published in physics journals, by the author’s country of origin. Classifying based on country of origin undercounts American scientists, many of whom were born in other countries. Bear in mind, too, that the U.S. lead is smaller in mature fields like physics than it is in developing fields like computer science, so focusing mainly on mature fields will make the U.S. position look worse than it really is.

Yet even by more careful measures, the consensus seems to be that the overall U.S. lead is narrowing. What are the implications of this for Americans?

It all depends on whether you see science and technology as a zero-sum game. If you view science and technology as instruments of national power (both hard military power and soft cultural power), then technical advancement is a zero-sum game and what matters most is how we compare to other countries. But if you see science and technology as creating knowledge and prosperity that diffuse out to the population as a whole, then technical advancement is not a zero-sum game, and you should welcome the flow of knowledge across borders – in both directions. Both views have some validity.

The clash between these two views seems most extreme in immigration policy. As I noted above, immigration has been a big contributor to the quality of U.S. science. But now, more than at any time I can remember, U.S. immigration policy is suspicious of foreigners, especially those who want to work in technical fields. Regardless of the wisdom of this policy – and I think it is tilted too far toward suspicion – we have to recognize the price we pay by adopting it (not to mention the price paid by the overwhelming majority of would-be immigrants from whom we have nothing to fear). Overseas applications to U.S. graduate schools in computer science and other technical fields seem to have dropped sharply this year, and that's a very bad sign.

I'm glad to see that the health of our technical communities is starting to become more of a national priority. In today's climate, national competitiveness will be an increasingly effective argument against over-regulation of technology. And after nearly a decade of seeing parts of my technical field turned into legal and regulatory minefields, I would like nothing more than to have the tide turn so that policymakers think about how to make technologists' jobs easier rather than harder.

California Decertifies Touch-Screen Voting

Looks like I missed the significance of this story last week (by Kim Zetter at Wired News). California Secretary of State Kevin Shelley decertified all touch-screen voting machines, not just the Diebold systems whose decertification had been recommended by the state's voting-systems panel.

Some counties may be able to get their machines recertified if they can meet a set of security requirements: the machines must be certified by the Federal government, provide a voter-verified paper trail, have a security plan that meets certain criteria, have source code disclosed to the Secretary of State and his designees (subject to reasonable confidentiality provisions), have a documented development process, not be modified at the last minute, have no network connections (including Internet, wireless, or phone connections), and satisfy a few other requirements.

Shelley condemned Diebold's actions in California as "despicable" and denounced the company's "deceitful tactics". He referred evidence of possible fraud by Diebold to the state Attorney General's office.

In a related story, Ireland recently decided not to use e-voting in its next election, due to security concerns.


Dare To Be Naive

Ernest Miller at CopyFight has an interesting response to my discussion yesterday of the Broadcast Flag. I wrote that the Flag is bad regulation, being poorly targeted at the goal of protecting TV broadcasts from Internet redistribution. Ernie replies that the Flag is actually well-targeted regulation, but for a different purpose:

[Y]ou'd have to be an idiot to think that the broadcast flag would prevent HDTV content from making it onto the internet. Since I don't believe that the commissioners are that stupid, I can only conclude that the FCC is acting quite cynically in support of an important constituency of theirs, the broadcasters *cough*regulatorycapture*cough*.

In other words, the purported purpose of the broadcast flag (to prevent HDTV from getting onto the internet) is not the real purpose of the broadcast flag, which appears to be to give content providers more control over the average citizen's ability to make use of media.

Ernie's theory, that the movie industry and the FCC are using "content protection" as a smokescreen to further a secret agenda of controlling media technology, fits the facts pretty well. And quite a few experienced lobbyists seem to believe it. Still, I don't think it's right to argue against the Broadcast Flag on that basis.

First, even if you believe the theory, it's often a useful debating tactic to pretend that the other side actually believes what they say they believe. It's hard to prove that someone is lying about their own beliefs and motivations; it can be much easier to prove that their asserted beliefs don't justify their conclusions. And proving that the official rationale for the Flag is wrong would do some good.

Second, if Ernie's theory is right, the fix is in and there's not much we can do about future Broadcast Flag type regulation. If we want to change things, we might as well act on the assumption that it matters whether the official rationale for the Flag is right.

And finally, I am convinced that at least some people in the movie industry, and at least some people at the FCC, actually believe the official rationale. I think this because of what these people say in private, after a few (literal or metaphorical) beers, and because of how they react when the official rationale for the Flag is challenged. Even in private, industry or FCC people often react to criticism of the official rationale with real passion and not just with platitudes. Either these (non-PR) people are extraordinarily good at staying on-message, or they really believe (as individuals) what they are saying.

So although Ernie's theory is very plausible, I will dare to be naive.

Where Does Your Government Stand on the WIPO Broadcasting Treaty?

The Union for the Public Domain is asking for help in surveying national governments about their (the governments') positions on the WIPO Broadcast Treaty. The UPD is looking for volunteers who are willing to contact the appropriate representatives of their national government, ask the representatives a series of questions provided by the UPD, record the answers, and submit them to the UPD. The UPD will collate the results and create a handy summary of where each government stands on the Treaty.

Regulating Stopgap Security

I wrote previously about stopgap security, a scenario in which there is no feasible long-term defense against a security threat, but instead one resorts to a sequence of measures that have only short-term efficacy. Today I want to close the loop on that topic, by discussing how government might regulate fields that rely on stopgap security. I’ll assume throughout that government has some reason (which may be wise or unwise) to regulate, and that the regulation is intended to support those deploying stopgap measures to defend their systems.

The first thing to note is that stopgap areas are inherently difficult to regulate, as stopgap security causes the technological landscape to change even faster than usual. The security strategy is to switch rapidly between short-term measures; and, because adversaries tend to defeat whole families of measures at once, the measures adopted tend to vary widely over time. It is very difficult for any regulatory scheme to keep up. In stopgap areas, regulation should be viewed with even more skepticism than usual.

If we must regulate stopgap areas, the regulation must strive to be technology-neutral. Regulation that mandates one technical approach, or even one family of approaches, is likely to block necessary adaptation. Even if no technology is mandated, regulations tend to encode technological assumptions, in their basic structure or in how they define terms; and these assumptions are likely to become invalid before long, making the regulatory scheme fit the defensive technology poorly.

One of the rules for stopgap security technology is to avoid approaches that impose a long-term cost in order to get a short-term benefit. The same is true for regulation. A regulatory approach should not impose long-term costs (such as compliance costs) in order to bolster a technical approach that offers only short-term benefits. Any regulation that requires all devices to do something, for the indefinite future, would therefore be suspect. Equally so, any regulation that creates compatibility barriers between compliant devices and non-compliant devices would be suspect, since the incompatibility would frustrate attempts to stop using the compliant technology once it becomes ineffective.

Finally, it is important not to shift the costs of a security strategy away from the people who decide whether to adopt that strategy. Stopgap measures carry an unusually high risk of having a disastrous cost-benefit ratio; in the worst case they impose significant long-term costs in exchange for limited, short-term benefit. If the party choosing which stopgap to use is also the party who has to absorb any long-term cost, then that party will be suitably cautious. But if regulation shifts the potential long-term cost onto somebody else, then the risk of disastrous technical choices gets much larger.

By this point, alert readers will be thinking "This sounds like an argument against the broadcast flag." Indeed, the FCC’s broadcast flag violates most of these rules: it mandates one technical approach (providing flexibility only within that approach), it creates compatibility barriers between compliant and non-compliant devices, and it shifts the long-term cost of compliance onto technology makers. How can the FCC have made this mistake? My guess is that they didn't, and still don't, realize that the broadcast flag is only a short-term stopgap.


Off-the-record Conferences

In writing about the Harvard Speedbump conference, I noted that its organizers declared it to be off the record, so that statements made or positions expressed at the conference would not be attributed publicly to any particular person or organization. JD Lasica asks, quite reasonably, why this was done: "Can someone explain to me why a conference needs to be 'off the record' in order for people to exchange ideas freely? What kind of society are we living in?"

This is the second off-the-record conference I have been to in my twenty years as a researcher. The first was a long-ago conference on parallel computing. Why that one was off the record was a mystery to me then, and it still is now. Nobody there had anything controversial to say, and no participant was important enough that anyone outside a small research community would even care what was said.

As to the recent Speedbump conference, I can at least understand the motivation for putting it off the record. Some of the participants, like Cary Sherman from RIAA and Fritz Attaway from MPAA, would be understood as speaking for their organizations; and the hope was that such people might depart from their talking points and speak more freely if they knew their statements wouldn't leave that room.

Overall, there was less posturing at this meeting than one usually sees at similar meetings. My guess is that this wasn't because of the off-the-record rule, but just because some time has passed in the copyright wars and cooler heads are starting to prevail. Nobody at the meeting took a position that really surprised me.

As far as I could tell, there were only two or three brief exchanges that would not have happened in an on-the-record meeting. These were discussions of various deals that either might be made between different entities, or that one entity had quietly offered to another in the past. For me, these discussions were less interesting than the rest of the meeting: clearly no deal could be made in a room with thirty bystanders, and the deals that were discussed were of the sort that savvy observers of the situation might have predicted anyway.

In retrospect, it looks to me like the conference needn't have been off the record. We could just as easily have followed the rule used in at least one other meeting I have attended, with everything on the record by default, but speakers allowed to place specific statements off the record.

To some extent, the off-the-record rule at the conference was a consequence of blogging. In pre-blog days, this issue could have been handled by not inviting any reporters to the meeting. Nowadays, at any decent-sized meeting, odds are good that several of the participants have blogs; and odds are also good that somebody will blog the meeting in real time. On the whole this is a wonderful thing; nobody has the time or money to go to every interesting conference.

I have learned a lot from bloggers' conference reports. It would be a shame to lose them because people are afraid of being quoted.

[My plan still calls for one more post on the substance of the conference, as promised yesterday.]


Stopgap Security

Another thing I learned at the Harvard Speedbumps conference (see here for a previous discussion) is that most people have poor intuition about how to use stopgap measures in security applications. By "stopgap measures" I mean measures that will fail in the long term, but might do some good in the short term while the adversary figures out how to work around them. For example, copyright owners use simple methods to identify the people who are offering files for upload on P2P networks. It's only a matter of time before P2P designers deploy better methods for shielding their users' identities so that today’s methods of identifying P2P users no longer work.

Standard security doctrine says that stopgap measures are a bad idea – that the right approach is to look for a long-term solution that the bad guys can't defeat simply by changing their tactics. Standard doctrine doesn't demand an impregnable mechanism, but it does insist that a good mechanism must not become utterly useless once the adversary adapts to it.

Yet sometimes, as in copyright owners' war on P2P infringement, there is no good solution, and stopgap measures are the only option you have. Typically you'll have many stopgaps to choose from. How should you decide which ones to adopt? I have three rules of thumb to suggest.

First, you should look carefully at the lifetime cost of each stopgap measure, compared to the value it will provide you. Since a measure will have a limited – and possibly quite short – lifetime, any measure that is expensive or time-consuming to deploy will be a loser. Equally unwise is any measure that incurs a long-term cost, such as a measure that requires future devices to implement obsolete stopgaps in order to remain compatible. A good stopgap can be undeployed fully once it has become obsolete.
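The lifetime cost-benefit comparison above can be sketched as simple arithmetic. Here is a minimal illustration in Python; all of the numbers and the two candidate measures are hypothetical, chosen only to show how a cheap, fully undeployable stopgap can beat a pricier one that drags a long-term compliance cost behind it:

```python
def net_value(deploy_cost, long_term_cost, lifetime_months, value_per_month):
    """Expected net value of a stopgap over its useful lifetime.

    long_term_cost is any cost that persists even after the measure
    becomes obsolete (e.g. a compatibility burden on future devices).
    """
    return value_per_month * lifetime_months - deploy_cost - long_term_cost

# Hypothetical candidates:
# - 'cheap' is inexpensive to deploy and can be fully undeployed later.
# - 'costly' lasts a bit longer but saddles future devices with a
#   compliance cost that outlives its usefulness.
cheap = net_value(deploy_cost=10, long_term_cost=0, lifetime_months=6,
                  value_per_month=5)
costly = net_value(deploy_cost=50, long_term_cost=40, lifetime_months=8,
                   value_per_month=5)

print(cheap)   # 5*6 - 10 - 0  = 20
print(costly)  # 5*8 - 50 - 40 = -50
```

With these made-up figures, the longer-lived measure is a net loss precisely because of its lingering long-term cost, which is the point of the rule of thumb.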

Second, recognize that when the adversary adapts to one stopgap, he may thereby render a whole family of potential stopgaps useless. So don't plan on rolling out an endless sequence of small variations on the same method. For example, if you encrypt data in transit, the adversary may shift to a strategy of observing your data at the destination, after the data has been decrypted. Once the adversary has done this, there is no point in changing cryptographic keys or shifting to different encryption methods. Plan to use different kinds of tactics, rather than variations on a single theme.

Third, remember that the adversary will rarely attack a stopgap head-on. Instead, he will probably work around it, by finding a tactic that makes it irrelevant. So don't worry too much about how well your stopgap resists direct attack, and don't choose a more expensive stopgap just because it stands up marginally better against direct attacks. If you're throwing an oil slick onto the road in front of your adversary, you needn't worry too much about the quality of the oil.

There are some hopeful signs that the big copyright owners are beginning to use stopgaps more effectively. But their policy prescriptions still reflect a poor understanding of stopgap strategy. In the third and final installment of my musings on speedbumps, I’ll talk about the public policy implications of the speedbump/stopgap approach to copyright enforcement.


Extreme Branding

Yesterday I saw something so odd that I just can't let it pass unrecorded.

I was on a plane from Newark to Seattle, and I noticed that I was sitting next to Adidas Man. Nearly everything about this guy bore the Adidas brand, generally both the name and the logo. His shirt. His pants. His shoes. His jacket. His suitcase. His watch. His CD player. And – I swear I'm not making this up – his wedding ring. Yes, the broad silver band worn on the fourth finger of his left hand was designed in classic wedding-band style, except for the addition of the Adidas logo, and the letters a-d-i-d-a-s embossed prominently on the outside.
