Ed Felten's blog

Broadcast Flag for Radio

JD Lasica has an important story about an FCC proposal, backed by the recording industry, to impose a broadcast-flag mandate on the design of digital radios. As JD suggests, this issue deserves much more attention than it has gotten.

He also has copies of correspondence on this issue exchanged between RIAA president Cary Sherman and Consumer Electronics Association (CEA) CEO Gary Shapiro. Shapiro notes that this proposal directly contradicts the RIAA's "Policy Principles on Digital Content," which say this:

Technology and record companies believe that technical protection measures dictated by the government (legislation or regulations mandating how these technologies should be designed, function and deployed, and what devices must do to respond to them) are not practical. The imposition of technical mandates is not the best way to serve the long-term interests of record companies, technology companies, and consumers ... The role of government, if needed at all, should be limited to enforcing compliance with voluntarily developed functional specifications reflecting consensus among affected interests.

The FCC's proposal will be open for public comment between June 16 and July 16.

New Email Spying Tool

A company called didtheyreadit.com has launched a new email-spying tool that is generating some controversy, and should generate more. The company claims that its product lets you invisibly track what happens to email messages you send: how many times they are read; when, where (net address and geographic location), and for how long they are read; how many times they are forwarded, and so on.

The company has two sales pitches. They tell privacy-sensitive people that the purpose is to tell a message’s sender whether the message got through to its destination, as implied by their company name. But elsewhere, they tout the pervasiveness and invisibility of their tracking tool (from their home page: "email that you send is invisibly tracked so that recipients will never know you’re using didtheyreadit").

Alex Halderman and I signed up for the free trial of the service, and sent tracked messages to a few people (with their consent), to figure out how the product works and how it is likely to fail in practice.

The product works by translating every tracked message into HTML format, and inserting a Web bug into the HTML. The Web bug is a one-pixel image file that is served by a web server at didtheyreadit.com. When the message recipient views the message on an HTML-enabled mailer, his viewing software will try to load the web bug image from the didtheyreadit server, thereby telling didtheyreadit.com that the email message is being viewed, and conveying the viewer’s network address, from which his geographic location may be deduced. The server responds to the request by streaming out a file very slowly (about eight bytes per second), apparently for as long as the mail viewer is willing to keep the connection open. When the user stops viewing the email message, his mail viewer gives up on loading the image; this closes the image-download connection, thereby telling didtheyreadit that the user has stopped viewing the message.
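
To make the mechanism concrete, here is a minimal sketch in Python of how such a slow-streaming tracking server might work. This is a reconstruction for illustration only, not didtheyreadit's actual code; the port number, log format, and pixel data are invented, and only the roughly eight-bytes-per-second pacing comes from what we observed.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# A standard 43-byte transparent 1x1 GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00"      # header, screen descriptor
         b"\x00\x00\x00\xff\xff\xff"                # two-entry color table
         b"!\xf9\x04\x01\x00\x00\x00\x00"           # graphic control extension
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
         b"\x02\x02D\x01\x00;")                     # one pixel of data, trailer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.path carries the per-message tracking code, as in the
        # "?code=..." URL shown in the example below.
        start = time.time()
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        try:
            # Dribble the image out at about eight bytes per second, so the
            # connection stays open while the message is on screen. (A real
            # tracker would presumably stretch this out much longer.)
            for b in PIXEL:
                self.wfile.write(bytes([b]))
                self.wfile.flush()
                time.sleep(0.125)
        except (BrokenPipeError, ConnectionResetError):
            pass  # the reader closed the message, dropping the connection
        # One log line captures network address, tracking code, and duration.
        print(f"{self.client_address[0]} viewed {self.path} "
              f"for {time.time() - start:.1f}s")

if __name__ == "__main__":
    HTTPServer(("", 8000), TrackingHandler).serve_forever()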

This trick of putting Web bugs in email has been used by spammers for several years now. You can do it yourself, if you have a Web site. What's new here is that this is being offered as a conveniently packaged product for ordinary consumers.

Because this is an existing trick, many users are already protected against it. You can protect yourself too, by telling your email-reading software to block loading of remote images in email messages. Some standard email-filtering or privacy-enhancement tools will also detect and disable Web bugs in email. So users of the didtheyreadit product can't be assured that the tracking will work.

It’s also possible to detect these web bugs in your incoming email. If you look at the source code for the message, you’ll see an IMG tag, containing a URL at didtheyreadit.com. Here’s an example:

<img src="http://didtheyreadit.com/index.php/worker?code=e070494e8453d5a233b1a6e19810f" width="1" height="1" />

The code, "e0704…810f" in my example, will be different in each tracked message. You can generate spurious "viewing" of the tracked message by loading the URL into your browser. Or you can put a copy of the entire web bug (the full IMG tag shown above) into a Web page, or paste it into an unrelated email message, to confuse didtheyreadit's servers about where the message went.
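
If you want to automate that detection step, a few lines of Python suffice. This is a quick sketch that assumes the IMG-tag format shown above; the script name in the usage comment is just an example, and a production filter would need to be more general.

import re
import sys

# Matches an IMG tag whose source points at didtheyreadit.com, like the
# example above.
BUG_PATTERN = re.compile(
    r'<img[^>]*src="https?://didtheyreadit\.com/[^"]*"[^>]*>',
    re.IGNORECASE)

def find_web_bugs(message_source: str) -> list[str]:
    """Return any didtheyreadit IMG tags found in raw message source."""
    return BUG_PATTERN.findall(message_source)

if __name__ == "__main__":
    # Usage: python findbugs.py < saved_message.eml
    for bug in find_web_bugs(sys.stdin.read()):
        print("Found web bug:", bug)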

Products like this sow the seeds of their own destruction, by triggering the adoption of technical measures that defeat them, and the creation of social norms that make their use unacceptable.

Penn State: No Servers in Dorms

Yesterday I attended the Educause Policy Conference in Washington, where I spoke on a panel on "Sharing Information and Controlling Content: Continuing Challenges for Higher Education."

One of the most interesting parts of the day was a brief presentation by Russ Vaught, the Associate Vice Provost for IT at Penn State. He said that Penn State has a policy banning server software of all kinds from dormitory computers. No email servers; no web servers; no DNS servers; no chat servers; no servers of any kind. The policy is motivated by a fear that server software might be used to infringe copyrights.

This is a wrongheaded policy that undermines the basic educational mission of the university. As educators, we're teaching our students to create, analyze, and disseminate ideas. We like nothing more than to see our students disseminating their ideas; and network servers are the greatest idea-disseminating technology ever invented. Keeping that technology away from our students is the last thing we should be doing.

The policy is especially harmful to computer science students, who would otherwise gain hands-on experience by managing their own computer systems. For example, it's much easier to teach a student about email, and email security, if she has run an email server herself. At Penn State, that can't happen.

The policy also seems to ignore some basic technical facts. Servers are a standard feature of computer systems, and most operating systems, including Windows, come with servers built in and turned on by default. Many homework assignments in computer science courses (including courses I teach) involve writing or running servers.
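
To see how routine this is, here is a complete TCP echo server in Python, the sort of thing a first networking assignment might ask for (a generic illustration, not an actual Penn State assignment). Under a blanket server ban, even running this in a dorm room would apparently be off limits.

import socket

HOST, PORT = "", 7007  # bind all interfaces; the port is arbitrary

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    while True:
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            while data:
                conn.sendall(data)  # echo back whatever the client sent
                data = conn.recv(1024)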

Penn State does provide a cumbersome bureaucratic process that can make limited exceptions to the server ban, but "only … in the rarest of circumstances" and then only for carefully constrained activities that are part of the coursework in a particular course.

Listening to Mr. Vaught's presentation, and talking privately to a Penn State official later in the day, I got the strong impression that, at times, Penn State puts a higher priority on fighting infringement than on educating its students.

Still More About End-User Liability

At the risk of alienating readers, here is one more post about the advisability of imposing liability on end-users for harm to third parties that results from break-ins to the end-users' computers. I promise this is the last post on this topic, at least for this week.

Rob Heverly, in a very interesting reply to my last post, focuses on the critical question in liability policy: who is in the best position to avert harm. In a scenario where an adversary breaks into Alice's computer and uses it as a launching pad for attacks that harm Bob, the question is whether Alice or Bob is better positioned to prevent the harm to Bob.

Mr. Heverly (I won't call him Rob because that's too close to my hypothetical Bob's name; and it's an iron rule in security discussions that the second party in any example must be named Bob) says that it will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine. I disagree. It's not that his general rule is always wrong; but I think it will prove to be wrong often enough that one will have to look at individual cases. To analyze a specific case, we'll have to look at a narrow class of attacks, evaluate the effectiveness and cost of Bob's countermeasures against that attack, and compare that evaluation to what we know about Alice's measures to protect herself. The result of such an evaluation is far from clear, even for straightforward attack classes such as spamming and simple denial of service attacks. Given our limited understanding of security technology, I don't think experts will agree on the answer.

So the underlying policy question – whether to hold Alice liable for harm to Bob – depends on technical considerations that we don't yet understand. Ultimately, the right answer may be different for different types of attacks; but drawing complicated distinctions between attack classes, and using different liability rules for different classes, would probably make the law too complicated. At this point, we just don't know enough to mess with liability rules for end-users.

More on End-User Liability

My post yesterday on end-user liability for security breaches elicited some interesting responses.

Several people debated the legal question of whether end-users are already liable under current law. I don't know the answer to that question, and my post yesterday was more in the nature of a hypothetical than a statement about current law. Rob Heverly, who appears to be a lawyer, says that because there is, in general, no duty to protect strangers from harm, end-users are not liable under current law for harm to others caused by intruders. Others say an unprotected machine may be an attractive nuisance. I'll leave it to the lawyers to duke that one out.

Others objected that it would be unfair to hold an end-user liable if that user took all reasonable protective steps, or if he merely failed to take some extra step. To see why this objection might be wrong, consider a hypothetical where an attacker breaks into Alice's machine and uses it to cause harm to Bob. It seems unfair to make Alice pay for this harm. But the alternative is to leave Bob to pay for it, which may be even more unfair, depending on circumstances. From a theoretical standpoint, it makes sense to send the bill to the party who was best situated to prevent the harm. If that turns out to be Alice, then one can argue that she should be liable for the harm. And this argument is plausible even if Alice has very little power to address the harm – as long as Bob has even less power to address it.

Others objected that novice users would be unable to protect themselves. That's true, but by itself it's not a good argument against liability. Imposing liability would cause many novice users to get help, by hiring competent people to manage their systems. If an end-user can spend $N to reduce the expected harm to others by more than $N, then we want them to do so.

Others objected that liability for breaches would be a kind of reverse lottery, with a few unlucky users being hit with large bills, because their systems happened to be used to cause serious harm, while other similarly situated users got off scot-free. The solution to this problem is insurance, which is an effective mechanism for spreading this kind of risk. (Eventually, this might be a standard rider on homeowner's or renter's insurance policies.) Insurance companies would also have the resources to study whether particular products or practices increase or reduce expected liability. They might impose a surcharge on people who use a risky operating system, or provide a discount for the use of effective defensive tools. This, in turn, would give end-users economic incentives to make socially beneficial choices.

Finally, some people responded to my statement that liability might work poorly where harm is diffuse. Seth Finkelstein suggested class action suits as a remedy. Class actions would make sense where the aggregate harm is large and the victims easy to identify. Rob Heverly suggested that large institutions like companies or universities would be likely lawsuit targets, because their many computers might cause enough harm to make a suit worthwhile. Both are good points, but I still believe that a great deal of harm – perhaps the majority – would be effectively shielded from recovery because of the costs of investigation and enforcement.

Should End-Users Be Liable for Security Breaches?

Eric Rescorla reports that, in a talk at WEIS, Dan Geer predicted (or possibly advocated) that end-users will be held liable for security breaches in their machines that cause harm to others.

As Eric notes, there is a good theoretical argument for this:

There are two kinds of costs to not securing your computer:

  • Internal costs: the costs to you of having your own machine broken into.
  • External costs: the costs to others of having your machine being broken into, primarily your machine being used as a platform for other attacks.

Currently, the only incentive you have is the internal costs. That incentive clearly isn't that strong, as lots of people don't upgrade their systems. The point of liability is to get you to also bear the external costs, which helps give you the right incentive to secure your systems.

Eric continues, astutely, by wondering whether it's actually worthwhile, economically, for users to spend lots of money and effort trying to secure their systems. If the cost of securing your computer exceeds the cost (internal and external) of not doing so, then the optimal choice is simply to accept the cost of breaches; and that's what you'll do, even if you're liable.
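
A toy calculation makes the trade-off explicit; all of the numbers here are invented for illustration.

# Toy illustration of the incentive argument, with made-up numbers.
cost_to_secure = 200.0      # hypothetical cost of patching and maintenance
p_compromise = 0.30         # hypothetical probability of a break-in
internal_harm = 300.0       # harm to the owner if compromised
external_harm = 400.0       # harm to third parties if compromised

# Without liability, the user weighs only the internal expected harm:
# 0.3 * 300 = 90, which is less than 200, so staying unsecured is rational.
# With liability, the full expected harm is internalized:
expected_harm = p_compromise * (internal_harm + external_harm)  # 210.0
print("secure" if cost_to_secure < expected_harm else "accept breaches")
# Eric's caveat: if securing cost, say, 250, then "accept breaches" would
# be the rational answer even under full liability.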

There's at least one more serious difficulty with end-user liability. Today, many intrusions into end-user machines lead to the installation of "bots" that the intruder uses later to send spam, launch denial of service attacks, or make other mischief. The harm caused by these bots is often diffuse.

For example, suppose Alice's machine is compromised and the intruder uses it to send 100,000 spam emails, each of which costs its recipient five cents to delete. Alice's insecurity has led to $5,000 of total harm. But who is going to sue Alice? No individual has suffered more than a few cents' worth of harm. Even if all of the affected parties can somehow put together an action against Alice, the administrative and legal costs of the action (not to mention the cost of identifying Alice in the first place) will be much more than $5,000. In aggregate, all of the world's Alices may be causing plenty of harm, but the costs of holding each particular Alice responsible may be excessive.

So, to the extent that the external costs of end-user insecurity are diffuse, end-user liability may do very little good. Maybe there is another way to internalize the external costs of end-user insecurity; but I'm not sure what it might be.

Florida Voting Machines Mis-recorded Votes

In Miami-Dade County, Florida, an internal county memo has come to light, documenting misrecording of votes by ES&S e-voting machines in a May 2003 election, according to a Matthew Haggman story in the Miami Daily Business Review.

The memo, written by Orlando Suarez, head of the county's Enterprise Technology Services Department, describes Mr. Suarez's examination of the electronic record of the May 2003 election in one precinct. The ES&S machines in question provide two reports at the end of an election. One report, the "vote image report", gives the vote tabulation (i.e., number of votes cast for each candidate) for each voting machine, and the other gives an audit log of significant events, such as initialization of the machine and the casting of a vote (but not who the vote was cast for), for each machine.

Mr. Suarez's examination found that the two records were inconsistent with each other, and that both were inconsistent with reality.

In his memo, Suarez analyzed a precinct where just nine electronic voting machines were used. He first examined the audit logs for all nine machines, which were compiled into one combined audit log. He found that the audit log made no mention of two of the machines used in the precinct.

In addition, he found that the audit log reported the serial number of a machine that was not used in that precinct. This phantom machine showed a count of ballots cast equal to the combined count of the two missing machines.

Then he looked at the vote image report, which aggregated the results from all nine voting machines. He discovered that three of the machines were not reported in the vote image report, while a serial number for a machine not used in the precinct did appear. That phantom machine showed a vote count equal to the combined count of two of the missing machines; the third missing machine showed no activity.

Further examination revealed 38 votes that appeared in the vote image report but not in the audit log.
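
The kind of consistency check Suarez performed is straightforward to automate, which makes it striking that the system doesn't do it itself. Here is a sketch in Python; the serial numbers and data structures are hypothetical, chosen to mirror the anomalies described above (the real ES&S report formats are proprietary).

# Cross-check machine serial numbers across the deployed list, the
# audit log, and the vote image report. All data here is hypothetical.
deployed = {"M101", "M102", "M103", "M104", "M105",
            "M106", "M107", "M108", "M109"}        # machines in the precinct
audit_serials = {"M101", "M102", "M103", "M104",
                 "M105", "M106", "M107", "M999"}   # two missing, one phantom
image_serials = {"M101", "M102", "M103", "M104",
                 "M105", "M106", "M999"}           # three missing, one phantom

def reconcile(name: str, reported: set[str]) -> None:
    print(f"{name}: missing {sorted(deployed - reported)}, "
          f"phantom {sorted(reported - deployed)}")

reconcile("audit log", audit_serials)          # missing M108, M109; phantom M999
reconcile("vote image report", image_serials)  # missing M107-M109; phantom M999
# A sound system should report no missing and no phantom machines in either
# report, and every vote in the image report should appear in the audit log.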

There is some evidence that the software used in this election was uncertified.

County officials don't see much of a problem here:

Nevertheless, [county elections supervisor Constance] Kaplan insisted that Suarez's analysis did not demonstrate any basic problems with the accuracy of the vote counts produced by the county's iVotronic system. "The Suarez memo has nothing to do with the tabulation process," she said. "It is very annoying that the coalition keeps equating the tabulation function with the audit function."

Maybe I'm being overly picky here, but isn't the vote tabulation supposed to match the audit trail? And isn't the vote tabulation report supposed to match reality?

Very annoying, indeed.

Microsoft: No Security Updates for Infringers

Microsoft, reversing a previous decision, says it will not provide security updates to unlicensed users of Windows XP. Microsoft is obviously entitled to do this if it wants, since it has no obligation to provide product support to people who didn’t buy the product in the first place. A more interesting question is whether this was the best decision from the standpoint of Microsoft and its existing customers. The answer is far from obvious.

Before I go further, let me make two assumptions clear. First, I’m assuming Microsoft has a reliable way to tell which copies of Windows are legitimate, so that they never deny updates mistakenly to legitimate customers. Second, I’m assuming Microsoft doesn’t care about the welfare of infringers and feels no obligation at all to help them.

Helping infringers could easily hurt Microsoft’s business, if doing so makes infringement a more attractive option. If patches are one of the benefits of buying the product, then people are more likely to buy; but if they can get patches even without buying, some will choose to infringe, thereby costing Microsoft sales.

On the other hand, if there is a sizable population of unpatched infringing copies out there, this hurts Microsoft’s legitimate customers, because an infringing customer might infect a legitimate customer. A large reservoir of unpatched (infringing) machines will aggravate an already serious malware problem, by making Windows an even more attractive target to malware authors, and by speeding the spread of new malware.

But wait, it gets even more complicated. If infringing copies are susceptible to existing malware, then some of the bad guys will be satisfied to reuse old malware, since there is still a population of (infringing) machines it can attack. But if infringing copies are patched, then the bad guys may create more new malware which is not stopped by patches; and this new malware will affect legitimate and infringing copies alike. So refusing to update infringing copies may leave the infringers as decoys who draw fire away from legitimate customers.

There are even more factors in play, but I’ve probably written too much about this already. The effect of all this on Microsoft’s reputation is particularly interesting. Ultimately, I have no idea whether Microsoft made the right choice. And I doubt that Microsoft knows either.

Valenti Quotes Me

In his testimony at the House DMCA-reform hearing today, Jack Valenti quoted me, in support of a point he wanted to make. The quote comes from last year's Berkeley DRM Conference, from my response to a question asked by Prof. Pam Samuelson. Here's the relevant section from Mr. Valenti's testimony (emphasis in original):

Keep in mind that, once copy protection is circumvented, there is no known technology that can limit the number of copies that can be produced from the original. In a recent symposium on the DMCA, Professor Samuelson of UC Berkeley posed the question: "whether it was possible to develop technologies that would allow…circumvention for fair uses without opening up the Pandora’s Box so that allowing these technologies means that you’re essentially repealing the anti-circumvention laws."

The question was answered by the prominent computer scientist and outspoken opponent of the DMCA, Professor Ed Felton [sic] of Princeton: "I think this is one of the most important technical questions surrounding DRM – whether we know, whether we can figure out how to accommodate fair use and other lawful use without opening up a big loophole. The answer, I think, right now, is that we don’t know how to do that. Not effectively."

Moreover, there is no known device that can distinguish between a “fair use” circumvention and an infringing one. Allowing copy protection measures to be circumvented will inevitably result in allowing anyone to make hundreds of copies – thousands – thereby devastating the home video market for movies. Some 40 percent of all revenues to the movie studios come from home video. If this marketplace decays, it will cripple the ability of copyright owners to retrieve their investment, and result in fewer and less interesting choices at the movie theater.

Here's the full excerpt from the DRM Conference transcript:

Question from Prof. Pam Samuelson:

So yesterday when I was doing the tutorial, Alex Alben asked me a question which, because I'm not a technologist, I was not in a very good position to try to answer, but since there are several technologists on this panel who are interested in information flows. The question that was put to me was a question about whether it was possible to develop technologies that would allow circumvention for fair use or other non-infringing purposes. Is it possible to sort of think creatively about anti-circumvention laws that might allow some room for circumvention for fair uses without opening up the Pandora's box so that allowing these technology means that you've essentially repealed the anti-circumvention laws.

[Other panelists' answers omitted.]

Answer by Ed Felten:

I think this is one of the most important technical questions around DRM, whether we know, whether we can figure out how to accommodate fair use and other lawful use without opening up a big loophole. And the answer is, I think, right now, is that we don't know how to do that. Not effectively. A lot of people would like to know whether we can do that or how we go about doing it, but it's a big open question right now.

Let's leave aside for now the flaws in Mr. Valenti's argument, and focus just on his use of the quote. Note that he artfully excerpts segments from Prof. Samuelson's question, to make it appear that she asked a different question than she really did. Also note that he removes an important part of my answer: the last sentence, where I talk about the technological relation between DRM and fair use as being a "big open question".

Which brings us back to the bill being discussed today. If we want to answer the "big open question" I mentioned, we need to do more research. But the DMCA severely limits some of the key research that we would need to do. The Boucher-Doolittle bill would open the door to this research, by creating a research exemption to the DMCA. But that issue is apparently not up for discussion today.

[Note: This post is based on Mr. Valenti's written testimony, of which I have a copy. I did not hear his live testimony. Seth Finkelstein reports that Mr. Valenti did use the quote in his oral testimony.]

House DMCA Reform Hearing Today

Today a congressional committee will hold a hearing on the Boucher-Doolittle bill (H.R. 107), known as the DMCRA, that would reform the DMCA. The hearing will be webcast, starting at about 10:00 AM Eastern. Look here for a witness list and link to the webcast.

The DMCRA would do four main things: require labeling of copy-protected CDs; allow circumvention of DRM for non-infringing purposes; allow the distribution of DRM-circumvention tools that enable fair use; and create an exemption to the DMCA for legitimate research.

Based on the witness list and other hints I have gotten, it appears that the hearing will focus on the consumer provisions of the bill. There probably won't be much discussion of the much-needed research exemption.
