Ed Felten's blog

RIAA Blowing Smoke About INDUCE Act

Today's New York Times runs a brief story by Matt Richtel and Tom Zeller, Jr. on the growing criticism of Sen. Hatch's INDUCE Act (now given a less bizarre name, and a new acronym, IICA).

Sellers of clearly legitimate products, such as companies in the telecom and electronics industries, argue that the bill is too broad.

The RIAA shoots back with this:

But Mitch Bainwol, chief executive of the Recording Industry Association of America, a recording industry lobbying group, said the legislation was meant to be narrowly tailored to address companies that build technology focused on illegal file sharing.

The RIAA is just wrong here. There is nothing in the bill that limits it to companies. There is nothing that limits it to technology. There is nothing that limits it to file sharing. Any of those limits could have been written into the bill – but they weren't. The language of the bill is deliberately broad, and it appears to be deliberately vague as well.

Advocates of the Act have said little if anything to justify its breadth. This will be a key issue in the debate over the bill, if any serious debate is allowed to occur.

The Future of Filesharing

Today there's a Senate hearing on "The Future of P2P". On Saturday, I gave a talk with a remarkably similar title, "The Future of Filesharing," at the ResNet 2004 conference, a gathering of about 400 people involved in running networks for residential colleges and universities. Here's a capsule summary of my talk.

(Before starting, a caveat. Filesharing technologies have many legitimate, non-infringing uses. When I say "filesharing" below, I'm using that term as a shorthand to refer to infringing uses of filesharing systems. Rather than clutter up the text below with lots of caveats about legitimate noninfringing uses, let's just put aside the noninfringing uses for now. Okay?)

From a technology standpoint, the future of filesharing will involve co-evolution between filesharing technology on one side, and anti-copying and anti-filesharing technology on the other. By "co-evolution" I mean that whenever one side finds a successful tactic, the other side will evolve to address that tactic, so that each side catalyzes the evolution of the other side's technology.

The resulting arms race favors the filesharing side, for two reasons. First, the filesharing side can probably adapt faster than the anti-filesharing side; and speed is important in this kind of move-countermove game. Second, the defensive technologies that filesharing systems are likely to adopt are the same defensive technologies used in ordinary commercial distributed systems (end-to-end encryption, anti-denial-of-service tactics, reputation systems, etc.), so the filesharing side can benefit from the enormous existing R&D efforts on defensive technologies.
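
To make one of those borrowed defenses concrete, here is a minimal sketch, in Python, of the kind of peer reputation tracking a filesharing client might adopt from commercial distributed systems. The class name, scoring weights, and threshold are hypothetical, chosen purely for illustration; real reputation systems are considerably more sophisticated.

```python
# Hypothetical sketch of a peer-reputation tracker, one of the defensive
# building blocks filesharing clients could borrow from commercial systems.
from collections import defaultdict

class PeerReputation:
    def __init__(self, decay=0.95):
        self.scores = defaultdict(float)  # peer_id -> running score
        self.decay = decay                # older behavior counts less over time

    def record(self, peer_id, good_transfer):
        """Reward peers that deliver valid data; penalize corrupt or fake files."""
        delta = 1.0 if good_transfer else -2.0
        self.scores[peer_id] = self.scores[peer_id] * self.decay + delta

    def trusted(self, peer_id, threshold=1.0):
        """Only download from (or route through) peers above a trust threshold."""
        return self.scores[peer_id] >= threshold

# Usage: after verifying a downloaded chunk against its hash, update the score.
rep = PeerReputation()
rep.record("peer-42", good_transfer=True)
rep.record("peer-42", good_transfer=True)
print(rep.trusted("peer-42"))  # True once the peer has a short good track record
```

The point is not that this particular design is what filesharing networks will use, but that machinery like it is standard, well studied, and cheap to adopt.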

Given all of this, it's a mistake for universities or ISPs to spend lots of money and effort trying to develop or deploy the One True Solution Technology (OTST). Co-evolution ensures that the OTST would sow the seeds of its own destruction, by motivating filesharing designers and users to change their systems and behavior to defeat it. At best, the OTST would buy a little time – but not much time, given the quick reaction time of the other side. Rather than an OTST, a series of quick-and-dirty measures might have some effect, and at least would waste fewer resources fighting a losing battle.

The best role for a university in the copyright wars is to do what a university does best: educate students. When I talk about education, I don't mean a five-minute lecture at freshman orientation. I don't mean adding three paragraphs on copyright to that rulebook that nobody reads. I don't mean scare tactics. What I do mean is a real, substantive discussion of the copyright system.

My experience is that students are eager to have serious, intellectual discussions about why we have the copyright system we have. They will take seriously the economic justification for copyright, if it is explained to them in a non-hysterical way. They'll appreciate the wisdom of the limitations on copyright, such as fair use and the idea/expression dichotomy; and in so doing they'll come to see why the law doesn't carve out exceptions for everything else they might want to do.

This kind of education is expensive, but all good education is. Surely, amid all of the hectoring "education" campaigns, there is room for some serious education too.

Tech Giants Support DMCA Reform

According to a Declan McCullagh story at news.com, big tech companies, including Intel and Sun Microsystems, and ISPs, including Verizon and SBC, will announce today that they have banded together to form the "Personal Technology Freedom Coalition" to support Rep. Rick Boucher's DMCRA bill (HR 107) to reform the DMCA.

The Boucher bill would reform the DMCA to allow the distribution and use of circumvention technologies for non-infringing purposes. (As written, the DMCA bans even circumventions that don't result in copyright infringement.) The bill would also create an exemption to the DMCA for legitimate research.

This bill has always been in the interests of technologists. The overbreadth of the DMCA has restrained both research and development of innovative, noninfringing uses of technology. The whole tech community – including users – would benefit from a narrowing of the DMCA.

So far, technology companies have been a bit shy about expressing their support for the Boucher bill, apparently out of a desire not to offend copyright maximalists. It's good news that these companies are now willing to stand up for their interests and the interests of their customers.

I'm sure we'll be hearing more about the Boucher bill in the coming weeks.

Voting News

The League of Women Voters last week rescinded its support for paperless e-voting machines. The decision was driven by grassroots support among the League's members, overriding a previous policy that was, according to rumor, decreed originally by a single member of the League's staff. (I can't find this story on the public part of the League's site, but it comes from a reliable source, so I'm pretty sure it's true.)

Also, tomorrow, June 22, there will be a rally in Washington for supporters of voter-verifiable paper trails. The rally runs from 11:45 a.m. until 1:00 p.m. on Cannon Terrace, just south of the U.S. Capitol, between the Cannon and Longworth House Office Buildings. (Metro stop: Capitol South; enter at the corner of New Jersey and Independence.) Speakers include Rep. Rush Holt and other members of Congress.

Lame Copy Protection Doesn't Depress CD Sales Much

A CD "protected" by the SunnComm anti-copying technology is now topping the music charts. This technology, you may recall, was the subject of a paper by Alex Halderman. The technology presents absolutely no barrier to copying on some PCs; on the remaining PCs, it can be defeated by holding down the Shift key when inserting the CD, which keeps Windows from auto-running the disc and installing the software that interferes with copying.

SunnComm execs say that this demonstrates consumer acceptance of their technology. A quick look at the consumer reviews at Amazon tells the real story: the technology causes significant problems for some law-abiding customers, and many customers dislike it. Many find it bearable only because it is so easily defeated, which gives customers who, say, want to load songs from the album onto their iPods a way to do so.

Alex Halderman reports receiving at least three unsolicited emails this week thanking him for explaining how consumers can stop the SunnComm technology from impeding their fair use of this album. Here's one:

Hello,

Thanks for the great article on this topic. I just bought the new Velvet Revolver CD and was not able to listen to it on my computer or import it into my iTunes program. I did use their "Copy" option which saved the files as Windows Media Files but these couldn't be converted by iTunes. Well this is not acceptable and within about 5 minutes I was able to find your article and disable the lame driver.

Keep up the great work!

Another, in addition to discussing the fair use issue, says this:

If I wasn't such a fan of this band, I would have taken the CD back in protest. But alas, it's the only way to be legal and I wish for the artist to reap their financial benefits.

Needless to say, the SunnComm technology has not kept the songs on this album off of the filesharing systems.

Hatch to Introduce INDUCE Act

Fred von Lohmann at EFF Deep Links reports that Sen. Orrin Hatch is planning to introduce, possibly today, a bill to create a new form of indirect liability for copyright infringement. The full name of the bill is somewhat bizarre: the "Inducement Devolves into Unlawful Child Exploitation Act".

Not being a lawyer, I can't immediately say what impact this bill would have. But Fred von Lohmann, a very smart copyright lawyer, sees it as a threat to innovation, and Ernest Miller, who is also well versed in copyright law, uses me as an example of a person whose legitimate activities might be threatened by the bill. That's definitely not the kind of thing I wanted to read over breakfast.

We'll have to see how the Hatch bill is received. If it passes, it looks like computer security research may become even more of a legal minefield than it already is.

FTC: Do-Not-Email List Won't Help

Yesterday the Federal Trade Commission released its recommendation to Congress regarding the proposed national Do Not Email list. It recommended against creating such a list at the present time, because the list would provide little or no reduction in spam, but would increase costs for legitimate emailers and might raise new security risks.

Congress, in the CAN-SPAM Act, asked the FTC to study the feasibility of instituting a national Do Not Email list, akin to the popular Do Not Call list. Yesterday's FTC recommendation is the result of the FTC's study.

The FTC relied on interviews with many people, and it retained three security experts – Matt Bishop, Avi Rubin, and me – to provide separate reports on the technical issues regarding the Do Not Email list. My report supported the action that the FTC ultimately took, and I assume that the other two reports did too.

I understand that the three expert reports will be released by the FTC, but I haven't found them on the FTC website yet. I'll post a link to my report when I find one.

Off the Grid?

I'll be in a place with a possibly iffy Internet link until Monday evening. If you don't hear from me in the next few days, I'm probably incommunicado; but please tune back in on Tuesday.

Rubin and Rescorla on E-Voting

There are two interesting new posts on e-voting over on ATAC.

In one post, Avi Rubin suggests a "hacking challenge" for e-voting technology: let experts tweak an existing e-voting system to rig it for one candidate, and then inject the tweaked system quietly into the certification pipeline and see if it passes. (All of this would be done with official approval and oversight, of course.)
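
To see why such a challenge would be meaningful, it helps to notice how small a malicious tweak can be. The sketch below is hypothetical Python, not code from any real voting system; it shows a tally routine that quietly shifts a sliver of votes to a favored candidate while looking, at a glance, like ordinary counting logic.

```python
# Hypothetical illustration only: a tally routine with a deliberately planted,
# hard-to-spot bias of the kind a hacking-challenge team might try to hide.
def tally(ballots, favored="Candidate A"):
    counts = {}
    for i, choice in enumerate(ballots):
        # The "bug": every 50th ballot is silently credited to the favored
        # candidate. The totals still sum to the number of ballots cast,
        # so a simple sanity check on the totals would not notice.
        recorded = favored if i % 50 == 49 else choice
        counts[recorded] = counts.get(recorded, 0) + 1
    return counts

# Example: 1,000 ballots split evenly become a 520-480 "victory."
ballots = ["Candidate A", "Candidate B"] * 500
print(tally(ballots))
```

The question Rubin's proposed challenge asks is whether the certification pipeline, as it exists today, would catch something this small.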

In the other post (also at Educated Guesswork, with better comments), Eric Rescorla responds to Clive Thompson's New York Times Magazine piece calling for open e-voting software. Thompson invoked the many-eyeballs phenomenon, saying that open software gets the benefit of inspection by many people, so that opening e-voting software would help to find any security flaws in it.

Eric counters by making two points. First, opening software just creates the opportunity to audit, but it doesn't actually motivate skilled people to spend a lot of their scarce time doing a disciplined audit. Second, bugs can lurk in software for a long time, even in code that experts look at routinely. So, Eric argues, instituting a formal audit process that has real teeth will do more good than opening the code.

While I agree with Eric that open source software isn't automatically more secure than closed source, I suspect that voting software may be the exceptional case where smart people will volunteer their time, or philanthropists will volunteer their money, to see that a serious audit actually happens. It's true, in principle, that the same audit can happen if the software stays closed. But I think it's much less likely to happen in practice with closed software – in a closed-source world, too many people have to approve the auditors or the audit procedures, and not all of those people will want to see a truly fair and comprehensive audit.

Eric also correctly notes that the main purpose of auditing is not to find all of the security flaws (a hopeless task) but to figure out how trustworthy the software is. To me, the main benefit of opening the code is that the trustworthiness of the code can become a matter of public debate; and the debate will be better if its participants can refer directly to the evidence.

Google Hires Ph.D.'s; Times Surprised

Yesterday's New York Times ran a story by Randall Stross marveling at the number of Ph.D.'s working at Google – indeed, marveling that Google wants to hire Ph.D.'s at all. Many other companies shun Ph.D.'s.

Deciding whether to hire bachelor's-level employees or Ph.D.'s really boils down to whether you want employees who are good at doing homework on short deadlines or employees who are good at figuring things out in an unstructured environment. (Like all generalizations, this is true only on average. There are plenty of outliers.) Google is a bit unusual in opting for the latter.

What the article doesn't say is that Google does not hire just anybody with a Ph.D. diploma. They're pretty careful about which Ph.D.'s they hire. Google can afford to be choosy since so many people seem to want to work there. Google benefits from a virtuous cycle that sometimes develops at a company, where the company has an unusual concentration of really smart employees, so lots of people want to work there, so the company can be very picky about whom it hires, thus allowing itself to hire more very smart people.

The article also hints at Google's success in integrating research with production. The usual model in the industry is to hire a small number of eggheads and send them off to some distant building to Think Deep Thoughts, so as not to disturb the mass of employees who make products. By contrast, Google generally uses the very same people to do research and create products. They do this by letting every employee spend 20% of their time doing anything they like that might be useful to the company. Doing this ensures that the research is done by people who understand the problems that come up in the company's day-to-day business.

Sustaining this model requires at least three things. First, you have to have employees who will use the unstructured research time productively; this is where the Ph.D.'s and other very smart people come in. Second, you need to maintain a work environment that is attractive to these people, because they'll have no trouble finding work elsewhere if they want to leave. Third, management has to have the discipline to avoid constantly canceling the 20% research time in order to meet the deadline du jour.

Google does all of this well. They probably benefit also from the nature of their product, which generates revenue every time it is used (rather than only when customers decide to pay for an upgrade), and which can be improved incrementally. Revenue doesn't depend on cranking out each year a version that can be sold as all-new, so the company can focus simply on making its products work well.

Can Google maintain all of this after it has gone public? My guess is that it can, as long as it is viewed as the technology leader in a lucrative area. If Google ever loses its aura, though, watch out – when the green eyeshades come out, many of those smart employees will leave for greener pastures, probably for a company that bills itself as "the new Google."
