Ed Felten's blog

Princeton Faculty Passes Grade Quota

Yesterday the Princeton faculty passed the proposed grade inflation resolution (discussed here), establishing a quota on A-level grades. From now on, no more than 35% of the course grades awarded by any department may be A-level grades, and no more than 55% of independent work grades may be A-level.

I had to miss the meeting due to travel, so I can't report directly on the debate at the faculty meeting. I'll update this post later if I hear anything interesting about the debate.

What is a Speedbump?

One thing I learned at the Harvard Speedbumps conference is that many people agree that "speedbump DRM" is a good idea; but they seem to have very different opinions of what "speedbump DRM" means. (The conference was declared "off the record" so I can't attribute specific opinions to specific people or organizations.)

One vision of speedbump DRM tries to delay the leakage of DRM'ed content onto the darknet (i.e., onto open peer-to-peer systems where it is available to anybody). By delaying this leakage for long enough, say for three months, this vision tries to protect a time window in which a copyrighted work can be sold at a premium price.

The problem with this approach is that it assumes that you can actually build a DRM system that will prevent leakage of the content for a suitable length of time. So far, that has not been the case – not even close. Most DRM systems are broken within hours, or within a few days at most. And even if they're not broken, the content leaks out in other ways, via leaks in the production process or via the analog hole. Once content is available on the darknet, DRM is nearly useless, since would-be infringers will ignore the DRM'ed content and get unconstrained copies from the darknet instead.

In any case, this approach isn't really trying to build a speedbump, it's trying to build a safe. (Even top-of-the-line office safes can only stand up to skilled safecrackers for hours.) A speedbump does delay passing cars, but only briefly. A three-month speedbump isn't really a speedbump at all.

A real speedbump doesn't stop drivers from following a path that they're determined to follow. Its purpose, instead, is to make one path less convenient than another. A speedbump strategy for copyright holders, then, tries to make illegal acquisition of content (via P2P, say) less convenient than the legitimate alternative.

There are several methods copyright owners can (and do) use to frustrate P2P infringers. Copyright owners can flood the P2P systems with spoofed files, so that users have to download multiple instances of a file before they get a real one. They can identify P2P uploaders offering copyrighted files, and send them scary warning messages, to reduce the supply of infringing files. These methods make it harder for P2P users to get the copyrighted files they want – they act as speedbumps.
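
To get a rough sense of how much of a speedbump spoofing creates, here is a back-of-the-envelope model of my own (an assumption, not anything measured by the industry): treat each download attempt as an independent draw, so if a fraction f of the copies in circulation are spoofs, a user needs about 1/(1-f) attempts on average before landing a genuine copy.

    def expected_attempts(spoof_fraction):
        """Expected downloads before the first genuine copy, assuming each
        attempt independently hits a spoof with probability spoof_fraction."""
        genuine = 1.0 - spoof_fraction
        return float("inf") if genuine <= 0 else 1.0 / genuine

    for f in (0.0, 0.5, 0.9, 0.99):
        print(f"{f:.0%} spoofed -> about {expected_attempts(f):.0f} attempts on average")

Even heavy spoofing only multiplies the user's effort; it never makes acquisition impossible, which is exactly the speedbump behavior described above.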

These kinds of speedbumps are very feasible. They can make a significant difference, if they're coupled with a legitimate alternative that's really attractive. And if they're done carefully, these measures have the virtue of inflicting little or no pain on noninfringers.

From an analytical, information security viewpoint, looking for speedbumps rather than impregnable walls requires us to think differently. How exactly we must change our thinking, and how the speedbump approach impacts public policy, are topics for another day.

How Much Information Do Princeton Grades Convey?

One of the standard arguments against grade inflation is that inflated grades convey less information about students' performances to employers, graduate schools, and the students themselves.

In light of the grade inflation debate at Princeton, I decided to apply information theory, a branch of computer science theory, to the question of how much information is conveyed by students' course grades. I report the results in a four-page memo, in which I conclude that Princeton grades convey 11% less information than they did thirty years ago, and that imposing a 35% quota on A-level grades, as Princeton is proposing to do, would increase the information content of grades by 10% at most.
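
The memo isn't reproduced here, but the core calculation is just the Shannon entropy of the grade distribution: the more grades pile up at the top of the scale, the fewer bits each individual grade conveys. Below is a minimal Python sketch of that calculation; the two distributions are made-up illustrations, not the actual Princeton figures from the memo.

    import math

    def entropy_bits(dist):
        """Shannon entropy, in bits, of a discrete grade distribution."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Illustrative distributions only -- these are NOT the numbers from the memo.
    older_grades = {"A": 0.30, "B": 0.40, "C": 0.22, "D": 0.05, "F": 0.03}
    recent_grades = {"A": 0.46, "B": 0.41, "C": 0.10, "D": 0.02, "F": 0.01}

    print(f"older distribution:  {entropy_bits(older_grades):.2f} bits per grade")
    print(f"recent distribution: {entropy_bits(recent_grades):.2f} bits per grade")

The more the distribution skews toward A's, the lower the entropy, which is the sense in which inflated grades "convey less information."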

I'm trying to convince the Dean of the Faculty to distribute my memo to the faculty before the Monday vote on the proposed A quota.

Today's Daily Princetonian ran a story, by Alyson Zureick, about my study.

California Panel Recommends Decertifying One Diebold System

The State of California's Voting Systems Panel has voted to recommend the decertification of Diebold's TSx e-voting system, according to a release from verifiedvoting.org. The final decision will be made by Secretary of State Kevin Shelley, but he is expected to approve the recommendation within the next week.

The TSx is only one of the Diebold e-voting systems used in California, but this is still an important step.

Copyright and Cultural Policy

James Grimmelmann offers another nice conference report, this time from the Seton Hall symposium on "Peer to Peer at the Crossroads". I had expressed concern earlier about the lack of technologists on the program at the symposium, but James reports that the lawyers did just fine on their own, steering well clear of the counterfactual technology assumptions one sometimes sees at lawyer conferences.

Among other interesting bits, James summarizes Tim Wu's presentation, based on a recent paper arguing that much of what passes for copyright policy is really just communications policy in disguise.

We're all familiar, by now, with the argument that expansive copyright is bad because it's destructive to innovation and allows incumbent copyright industries to prevent the birth of new competitors. Content companies tied to old distribution models are, goes this argument, strangling new technologies in their crib. We're also familiar, by now, with the argument that changes in technology are destroying old, profitable, and socially useful businesses, without creating anything stable, profitable, or beneficial in their place. In this strain of argument, technological Boston Stranglers roam free, wrecking the enormous investments that incumbents have made and ruining the incentives for them to put the needed money into building the services and networks of the future.

Tim's insight, to do it the injustice of a sound-bite summarization, is that these are not really arguments that are rooted in copyright policy. These are communications policy arguments; it just so happens that the law being used to shape communications policy here is copyright law. Where in the past we'd have argued about how far to turn the "antitrust exemption for ILECs" knob, or which "spectrum auction" buttons to push, now we're arguing about where to set the "copyright" slider for optimal communications policy. That means debates about copyright are being phrased in terms of a traditional political axis in communications law: whether to favor vertically-integrated (possibly monopolist) incumbents who will invest heavily because they can capture the profits from their investments, or to favor evolutionary competition with open standards in which the pressure for investment is driven by the need to stay ahead of one's competitors.

The punch line: right now, our official direction in communications policy is moving towards the latter model. The big 1996 act embraced these principles, and the FCC is talking them up big time. Copyright, to the extent that it is currently pushing towards the former model, is pushing us to a communications model that flourished in decades past but is now out of favor.

This is a very important point, because the failure to see copyright in the broader context of communications policy has been the root cause of many policy errors, such as the FCC's Broadcast Flag ruling.

I would have liked to attend the Seton Hall symposium myself, but I was at the Harvard Speedbumps conference that day. And I would have produced a Grimmelmann-quality conference report – really I would – but the Harvard conference was officially off-the-record. I'll have more to say in future posts about the ideas discussed at the speedbumps conference, but without attributing them to any particular people.

Another Form of Grade Inflation

You may recall Princeton's proposal to fight grade inflation by putting a quota on the number of A's that can be awarded. Joe Barillari made a brilliant followup proposal in yesterday's Daily Princetonian, to fight the "problem" of inflation in students' ratings of their professors' teaching.

Diebold Misled Officials about Certification

Diebold Election Systems knowingly used uncertified software in California elections, despite warnings from its lawyers that doing so was illegal and might subject the company to criminal sanctions and decertification in California, according to Ian Hoffman's story in the Oakland Tribune.

The story says that Diebold made false representations about certification to state officials:

The drafts [of letters to the state] show [Diebold's lawyers] staked out a firm position that a critical piece of Diebold's voting system – its voter-card encoders – didn't need national or state approval because they were commercial-off-the-shelf products, never modified by Diebold.

But on the same day the letter was received, Diebold-hired techs were loading non-commercial Diebold software into voter-card encoders in a West Sacramento warehouse for shipment to Alameda and San Diego counties.

Many of these encoders failed on election day, causing voters to be turned away from the polls in San Diego and Alameda Counties.

This brings Diebold one step closer to being decertified in California:

"Diebold may suffer from gross incompetence, gross negligence. I don't know whether there's any malevolence involved," said a senior California elections official who spoke on condition of anonymity. "I don't know why they've acted the way they've acted and the way they're continuing to act. Notwithstanding their rhetoric, they have not learned any lessons in terms of dealing with this secretary (of state)."

California voting officials will discuss Diebold's behavior at a two-day hearing that starts today.

[link via Dan Gillmor]

Industry to Sue Supernode Operators?

Rumor has it that the recording industry is considering yet another tactic in their war on peer-to-peer filesharing: lawsuits against people whose computers act as supernodes.

Supernodes are a feature of some P2P networks, such as the FastTrack network used by Kazaa and Grokster. Supernodes act as hubs for the P2P network, helping people find the files they search for. (Once a user finds the desired file, that file is downloaded directly from the machine that has it.)

The industry tried suing the makers of Kazaa and Grokster, but the judge ruled that these P2P companies could not be punished because, unlike Napster, they did not participate in acts of infringement. In Napster, every search involved the participation of server machines that were run by Napster itself. In FastTrack networks, the same role is played by the supernodes, which are not run by the P2P vendor.

A supernode is just an ordinary end-user's computer. The P2P software causes a user's computer to "volunteer" to be a supernode, if the computer is fast and has a good network connection. The user may not know that his computer is a supernode. Indeed, he may not even know what a supernode is.
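
The real FastTrack protocol is proprietary and undocumented, so the sketch below is only a toy model of the idea: a supernode keeps an index of which peers offer which files and answers search queries, while actual downloads happen directly between peers. The should_volunteer heuristic is hypothetical, standing in for whatever criteria the real clients use to promote a machine to supernode status.

    from collections import defaultdict

    class Supernode:
        """Toy model of a supernode: it indexes which peers hold which files
        and answers searches, but never transfers the files itself."""

        def __init__(self):
            self.index = defaultdict(set)  # filename -> set of peer addresses

        def register(self, peer_addr, shared_files):
            # An ordinary peer reports the names of the files it is sharing.
            for name in shared_files:
                self.index[name].add(peer_addr)

        def search(self, query):
            # Return peers offering matching files; the download itself then
            # happens directly between the requesting peer and one of them.
            return {name: sorted(peers)
                    for name, peers in self.index.items()
                    if query.lower() in name.lower()}

    def should_volunteer(bandwidth_kbps, uptime_hours):
        """Hypothetical heuristic: fast, well-connected machines 'volunteer'
        as supernodes, often without the user ever noticing."""
        return bandwidth_kbps >= 512 and uptime_hours >= 2

    sn = Supernode()
    sn.register("peer-a", ["song.mp3", "notes.pdf"])
    sn.register("peer-b", ["song.mp3"])
    print(sn.search("song"))                                      # {'song.mp3': ['peer-a', 'peer-b']}
    print(should_volunteer(bandwidth_kbps=1500, uptime_hours=8))  # True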

The likely theory behind a lawsuit would be that a supernode is participating in acts of infringement, just as Napster did, and so it should be held responsible as a contributory and/or vicarious infringer, just as Napster was. Regardless of the legalities, many people would think such lawsuits unfair, because at least some of the defendants would be unaware of their role as supernodes.

Perhaps the real goal of the lawsuits would be to convince people not to act as supernodes. Most of the P2P applications have a "don't be a supernode" configuration switch. If people understood that they could avoid lawsuits by using this switch, many would.

On the other hand, the industry had hoped that the existing lawsuits against P2P direct infringers would convince people to use the "don't upload files" configuration switch on their P2P software, even if they still use P2P for downloading. (It's not that downloading is legal, or that the industry doesn't object to it. It's just that it's much easier to catch uploaders than downloaders, and the industry's suits thus far have been against uploaders.)

The lawsuits have been effective in teaching people that unauthorized filesharing is almost always illegal and carries potentially serious penalties. They have been far less effective, I think, in enticing people to turn off the upload feature in their P2P software. Getting people to turn off the supernode feature seems even harder.

The main effect of suits against supernode operators would be to confuse ordinary users about the law, which can't be in the industry's best interest. If they're going to file suits against P2P users, going after direct infringers looks like the best strategy.

Cyber-Security Research Undersupported

Improving cybersecurity is supposedly a national priority in the U.S., but after reading Peter Harsha's report on a recent meeting of the President's Information Technology Advisory Committee (PITAC), it's clear that cybersecurity research is severely underfunded.

Here's a summary: The National Science Foundation has very little security research money, enough to fund 40% or less of the research that NSF thinks deserves support. Security research at DARPA (the Defense department's research agency) is gradually being classified, locking out many of the best researchers and preventing the application of research results in the civilian infrastructure. The Homeland Security department is focusing on very short term deployment issues, to the near-exclusion of research. And corporate research labs, which have shrunk drastically in recent years, do mostly short term work. There is very little money available to support research with a longer term (say, five to ten year) payoff.

A Perfectly Compatible Form of Incompatibility

Scientific American has published an interview with Leonardo Chiariglione, the creator of the MP3 music format and formerly head of the disastrous Secure Digital Music Initiative. (SDMI tried to devise a standard for audio content protection. The group suffered from serious internal disagreements, and it finally dissolved after a failed attempt to use DMCA lawsuit threats to suppress publication of a research paper, by my colleagues and me, on the weaknesses of the group's technology.)

Now Chiariglione is leading another group to devise the ultimate DRM (i.e., anti-copying) music format: "a system that guarantees the protection of copyrights but at the same time is completely transparent and universal." He doesn't seem to see that this goal is self-contradictory. After all, we already have a format that is completely transparent and universal: MP3.

The whole point of DRM technology is to prevent people from moving music usefully from point A to point B, at least sometimes. To make DRM work, you have to ensure that not just anybody can build a music player – otherwise people will build players that don't obey the DRM restrictions you want to attach to the content. DRM, in other words, strives to create incompatibility between the approved devices and uses, and the unapproved ones. Incompatibility isn't an unfortunate side-effect of deficient DRM systems – it's the goal of DRM.

A perfectly compatible, perfectly transparent DRM system is a logical impossibility.

The idea of universally compatible DRM is so odd that it's worth stopping for a minute to try to understand the mindset that led to it. And here Chiariglione's comments on MP3 are revealing:

[Scientific American interviewer]: Wasn't it clear from the beginning that MP3 would be used to distribute music illegally?

[Chiariglione]: When we approved the standard in 1992 no one thought about piracy. PCs were not powerful enough to decode MP3, and internet connections were few and slow. The scenario that most had in mind was that companies would use MP3 to store music in big, powerful servers and broadcast it. It wasn't until the late '90s that PCs, the Web and then peer-to-peer created a completely different context. We were probably naive, but we didn't expect that it would happen so fast.

The attitude of MP3's designers, in other words, was that music technology is the exclusive domain of the music industry. They didn't seem to realize that customers would get their own technology, and that customers would decide for themselves what technology to build and how to use it. The compatible-DRM agenda is predicated on the same logical mistake, of thinking that technology is the province of a small group that can gather in a room somewhere to decide what the future will be like. That attitude is as naive now as it was in the early days of MP3.
