
DoJ To Divert Resources to P2P Enforcement

Last week the Department of Justice issued a report on intellectual property enforcement. Public discussion has been slow to develop, since the report seems to be encoded in some variant of the PDF format that stops many people from reading it. (I could read it fine on one of my computers, but ran into an error message saying the file was encrypted on the rest of my machines. Does anybody have a non-crippled version?)

The report makes a strong case for the harmfulness of intellectual property crimes, and then proceeds to suggest some steps to strengthen enforcement. I couldn't help noticing, though, that the enforcement effort is not aimed at the most harmful crimes cited in the report.

The report leads with the story of a criminal who sold counterfeit medicines, which caused a patient to die because he was not taking the medicines he (and his doctors) thought he was. This is a serious crime. But what makes it serious is the criminal's lying about the chemical composition of the medicines, not his lying about their brand name. This kind of counterfeiting is best treated as an attack on public safety rather than a violation of trademark law.

(This is not to say that counterfeiting of non-safety-critical products should be ignored, only that counterfeiting of safety-critical products can be much more serious.)

Similarly, the report argues that for-profit piracy, mostly of physical media, should be treated seriously. It claims that such piracy funds organized crime, and it hints (without citing evidence) that physical piracy might fund terrorism too. All of which argues for a crackdown on for-profit distribution of copied media.

But when it comes to action items, the report's target seems to shift away from counterfeiting and for-profit piracy, and toward P2P file sharing. Why else, for example, would the report bother to endorse the Induce Act, which does not apply to counterfeiters or for-profit infringers but only to the makers of products, such as P2P software, that merely allow not-for-profit infringement?

It's hard to believe, in today's world, that putting P2P users in jail is the best use of our scarce national law-enforcement resources. Copyright owners can already bring down terrifying monetary judgments on P2P infringers. If we're going to spend DoJ resources on attacking IP crime, let's go after counterfeiters (especially of safety-critical products) and large-scale for-profit infringers. As Adam Shostack notes, to shift resources to enforcing less critical IP crimes, at a time when possible-terrorist wiretaps go unheard and violent fugitive cases go uninvestigated, is to lose track of our priorities.

Fast-Forwarding Becomes a Partisan Issue

Remember when I suggested that Republicans might be more prone to copyright sanity than Democrats? Perhaps I was on to something. Consider a recent Senate exchange that was caught by Jason Schultz and Frank Field.

Senator John McCain (Republican from Arizona) has placed a block on two copyright-expansion bills, H.R. 2391 and H.R. 4077, because they contain language implying that it's not legal to fast-forward through the commercials when you're watching a recorded TV show. McCain says he won't unblock the bills unless the language is removed. (As I understand it, the block makes it extremely difficult to bring the bill up for a vote.)

Sen. Patrick Leahy (Democrat from Vermont) responded by blasting McCain, saying he had blocked the bill for partisan reasons. Here's Leahy:

In blocking this legislation, these Republicans are failing to practice what they have so often preached during this Congress. For all of their talk about jobs, about allowing the American worker to succeed, they are now placing our economy at greater risk through their inaction. It is a failure that will inevitably continue a disturbing trend: our economy loses literally hundreds of billions of dollars every year to various forms of piracy.

Instead of making inroads in this fight, we have the Republican intellectual property roadblock.

Do the Democrats really want to be known as the party that would ban fast-forwarding?

Another Broken Diebold Protocol

Yesterday I wrote about a terribly weak security protocol in the Diebold AccuVote-TS system (at least as it existed in 2002), as reported in a talk by Dan Wallach. That wasn't the only broken Diebold protocol Dan discussed. Here's another one which may be even scarier.

The Diebold system allows a polling place administrator to use a smartcard to control a voting machine, performing operations such as closing the polls for the day. The administrator gets a special administrator smartcard (a credit-card-sized computing device) and puts it into the voting machine. The machine uses a special protocol to validate the card, and then accepts commands from the administrator.

This is a decent plan, but Diebold botched the design of the protocol. Here's the protocol they use:

terminal to card: "What kind of card are you?"
card to terminal: "Administrator"
terminal to card: "What's the password?"
card to terminal: [Value1]
terminal to user: "What's the password?"
user to terminal: [Value2]

If Value1=Value2, then the terminal allows the user to execute administrative commands.

Like yesterday's protocol, this one fails because malicious users can make their own smartcard. (Smartcard kits cost less than $50.) Suppose Zeke is a malicious voter. He makes a smartcard that answers "Administrator" to the first question and (say) "1234" to the second question. He shows up to vote, signs in, goes into the voting booth, and inserts his malicious smartcard. The malicious smartcard tells the machine that the secret password is 1234; when the machine asks Zeke himself for the secret password, he enters 1234. The machine will then execute any administrative command Zeke wants to give it. For example, he can tell the machine that the election is over.
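The flaw is easy to see when the check is written out as code. Here's a minimal sketch of the attack; the class and function names are mine, not Diebold's, and the real system is of course more complicated. The point is only that the terminal compares two values the attacker controls:

```python
# Hypothetical sketch of the broken admin-card check described above.
# All names are illustrative, not Diebold's actual code.

class MaliciousCard:
    """A homemade smartcard that claims to be an administrator card."""
    def card_type(self):
        return "Administrator"

    def password(self):
        return "1234"  # any value the attacker chooses

def terminal_accepts_admin(card, user_typed_password):
    # The terminal compares the card's password to what the user types.
    # But the attacker controls BOTH values, so the check always passes.
    if card.card_type() != "Administrator":
        return False
    return card.password() == user_typed_password

card = MaliciousCard()
print(terminal_accepts_admin(card, "1234"))  # True: Zeke is now an "administrator"
```

Nothing in this exchange requires knowledge of any secret held by the terminal, which is why any $50 homemade card defeats it.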

This system was apparently used in the Georgia 2002 election. Has Diebold fixed this problem, or the one I described yesterday? We don't know.

UPDATE (1:30 PM): Just to be clear, telling a machine that the election is over is harmful because it puts the machine in a mode where it won't accept any votes. Getting the machine back into vote-accepting mode, without zeroing the vote counts, will likely require a visit from a technician, which could keep the voting machine offline for a significant period. (If there are other machines at the same precinct, they could be targeted too.) This attack could affect an election result if it is targeted at a precinct or a time of day in which votes are expected to favor a particular candidate.

Bad Protocol

Dan Wallach from Rice University was here on Monday and gave a talk on e-voting. One of the examples in his talk was interesting enough that I thought I would share it with you, both as an introductory example of how security analysts think, and as an illustration of how badly Diebold botched the design of their voting system.

One of the problems in voting system design is making sure that each voter who signs in is allowed to vote only once. In the Diebold AccuVote-TS system, this is done using smartcards. (Smartcards are the size and shape of credit cards, but they have tiny computers inside.) After signing in, a voter would be given a smartcard – the "voter card" – that had been activated by a poll worker. The voter would slide the voter card into a voting machine. The voting machine would let the voter cast one vote, and would then cause the voter card to deactivate itself so that the voter couldn't vote again. The voter would return the deactivated voter card after leaving the voting booth.

This sounds like a decent plan, but Diebold botched the design of the protocol that the voting terminal used to talk to the voter card. The protocol involved a series of six messages, as follows:

terminal to card: "My password is [8 byte value]"
card to terminal: "Okay"
terminal to card: "Are you a valid card?"
card to terminal: "Yes."
terminal to card: "Please deactivate yourself."
card to terminal: "Okay."

Can you spot the problem here? (Hint: anybody can make their own smartcard that sends whatever messages they like.)

As most of you probably noticed – and Diebold's engineers apparently did not – the smartcard doesn't actually do anything surprising in this protocol. Anybody can make a smartcard that sends the three messages "Okay; Yes; Okay" and use it to cast an extra vote. (Do-it-yourself smartcard kits cost less than $50.)

Indeed, anybody can make a smartcard that sends the three-message sequence "Okay; Yes; Okay" over and over, and can thereby vote as many times as desired, at least until a poll worker asks why the voter is spending so long in the booth.

One problem with the Diebold protocol is that rather than asking the card to prove that it is valid, the terminal simply asks the card whether it is valid, and accepts whatever answer the card gives. If a man calls you on the phone and says he is me, you can't just ask him "Are you really Ed Felten?" and accept the answer at face value. But that's the equivalent of what Diebold is doing here.
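To make the contrast concrete, here is a toy sketch of both approaches. The names and the shared-secret scheme are hypothetical illustrations of challenge-response in general (a real deployment would use proper smartcard cryptography), but they show the difference between asking a card whether it is valid and asking it to prove it:

```python
# Illustrative contrast: "trust the card's answer" vs. challenge-response.
# All names are hypothetical; this is a sketch, not Diebold's code.
import hmac, hashlib, os

class ForgedCard:
    """Replays the fixed 'Okay; Yes; Okay' script -- needs no secret."""
    def is_valid(self):
        return "Yes"

def diebold_style_check(card):
    # The terminal simply believes whatever the card says.
    return card.is_valid() == "Yes"

# A challenge-response check: genuine cards hold a secret issued at
# poll setup, and the terminal demands proof of knowledge of it.
SECRET = b"issued-at-poll-setup"

class GenuineCard:
    def respond(self, challenge):
        return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def challenge_response_check(card):
    challenge = os.urandom(16)  # fresh random challenge each time
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card.respond(challenge), expected)

print(diebold_style_check(ForgedCard()))        # True -- the forgery is accepted
print(challenge_response_check(GenuineCard()))  # True -- a genuine card passes
```

A forged card passes the first check trivially, but cannot answer the challenge in the second check without the secret, and the fresh random challenge prevents it from simply replaying an old answer.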

This system was apparently used in a real election in Georgia in 2002. Yikes.

Experimental Use Exception Evaporating?

Doug Tygar points to a front-page article in yesterday's Wall Street Journal about a lawsuit that raises troubling questions about researchers' ability to use patented technologies for experimental purposes.

Patent law, which makes it illegal to make or use a patented invention without permission of the patent owner, has an exception for experimental use. The exception, as I understand it, applies only to non-commercial, curiosity-driven experiments.

John Madey invented, and patented, an important technology called the free-electron laser (FEL). He was a professor at Duke University, where he headed an FEL laboratory. Then he was ousted after a nasty squabble with Duke, and he moved to another university. Duke continued to operate the FEL.

Madey sued Duke for patent infringement, for using the FEL without his permission. Duke wrapped itself in the experimental use exception, but Madey argued that Duke, in its use of the FEL, was not engaged in idle inquiry but was carrying on its business of research and education. The Federal Circuit Court of Appeals agreed with Madey that Duke was not eligible for the exception:

Our precedent clearly does not immunize use that is in any way commercial in nature. Similarly, our precedent does not immunize any conduct that is in keeping with the alleged infringer's legitimate business, regardless of commercial implications. For example, major research universities, such as Duke, often sanction and fund research projects with arguably no commercial application whatsoever. However, these projects unmistakably further the institutions' legitimate business objectives, including educating and enlightening students and faculty participating in these projects. These projects also serve, for example, to increase the status of the institution and lure students, faculty, and lucrative research grants.

It's hard to see, in light of this decision, how anybody could ever qualify for the experimental use exception.

If this decision stands, it could have a big impact on university researchers. Up to now, researchers have been free to concentrate on discovery rather than patent negotiations, and to build and use whatever equipment was necessary for their experiments without worrying that somebody would sue to shut down their labs. Now that may have to change.

Here's a tip for law students: current trends indicate hiring growth in research universities' general counsel offices.

Latest Induce Act Draft Still Buggy

Reportedly the Induce Act has stalled, after the breakdown of negotiations over statutory language. Ernest Miller has the last draft offered by the entertainment industry.

(Notice how the entertainment industry labels its draft as the "copyright owners'" proposal. It takes some chutzpah to call your side the "copyright owners" when the largest copyright-owning industry – the software industry – is on the other side.)

The draft makes yet another attempt to define "peer-to-peer". While the last draft's definition was too broad, including, for example, the Web, this one is too narrow. It probably encompasses most or all of the P2P systems currently being used, but its narrowness allows those systems to be redesigned to evade the definition.

Here's the definition:

The term "covered peer-to-peer product" shall mean a widely available device, or computer program for execution on a large number of devices, communicating over the Internet or any other publicly available network and performing or causing the performance at each such device all of the following functions:

(i) providing search information relating to copies or phonorecords available for transmission to other devices;

(ii) locating other devices that provide information relating to copies or phonorecords available for transmission that is responsive to search requests describing desired copies or phonorecords; and

(iii) transmitting a requested copy or phonorecord to another device that located the copy or phonorecord through such other device's performance of the function described in clause (ii);

unless the provider of the device or computer program has the right and ability to control the copies or phonorecords that may be located by its use.

It looks like there are several ways to design a P2P system that evades this definition:

The definition requires each device to do all three of the enumerated functions. A system could have some devices do a subset of the functions.

The product must be a device or a program, which would appear to exempt systems that use multiple programs to perform different functions.

Function (iii) requires that the copy be transmitted to another device, and that other device must have located the copy to be transmitted via function (ii). Data could move through intermediaries that don't use function (ii).

As I've written before, it's awfully hard to come up with a statutory definition of peer-to-peer, because many popular and completely legitimate services on the net are designed in a peer-to-peer style; and because there is nothing special about the particular design strategy used by today's P2P filesharing systems.

Business Week on Chilled Researchers

Heather Green at Business Week has a nice new piece, "Commentary: Are the Copyright Wars Chilling Innovation?" Despite the question mark in the title, it's clear from the piece that innovation is being chilled, especially in the research community.

The piece starts out by retelling the story of the legal smackdown threatened against my colleagues and me over a paper on digital watermarking technology. It goes on to discuss the chilling effect of copyright-related overregulation on others:

Intimidation isn't hard to spot in academia. Aviel Rubin, a Johns Hopkins University professor who last year uncovered flaws in electronic-voting software developed by Diebold Inc. (DBD ), says he spends precious time plotting legal strategies before publishing research connected in any way to copyrights. Matthew Blaze, a computer scientist at the University of Pennsylvania, avoids certain types of computer security-related research because the techniques are also used in copy protection.

The pall has spread over classrooms as well. Eugene H. Spafford, a professor and digital-security expert at Purdue University, and David Wagner, an associate professor of computer science at the University of California at Berkeley, are refusing to take on teaching assignments in certain areas relating to computer security. "The problem isn't that we're worried about prosecution from the government. The problem is the civil lawsuits from the movie and music industries," Spafford says. "I don't have the resources to deal with that."

Rubin, Blaze, Spafford, and Wagner are all leaders in the field, and all are avoiding legitimate and useful research and/or teaching because of the DMCA and laws like it.

The movie industry, as usual, offers nothing but the suspension of disbelief. Fritz Attaway: "It's easy to assert you feel chilled, but I don't see any evidence to support that". This from an industry with a long record of suing technical innovators.

[link via SNTReport.com]

Recent Induce Act Draft

Reportedly, the secret negotiations to rewrite the Induce Act are ongoing. I got hold of a recent staff discussion draft of the Act. It's labeled "10/1" but I understand that the negotiators were working from it as late as yesterday.

I'll be back later with comment.

UPDATE (8 PM): This draft is narrower than previous ones, in that it tries to limit liability to products related to "peer-to-peer" infringement. Unfortunately, the definition of peer-to-peer is overbroad. Here's the definition:

the term “peer-to-peer” shall mean any generally available product or service that enables individual consumers’ devices or computers, over a publicly available network, to make a copy or phonorecord available to, and locate and obtain a copy or phonorecord from, the computers or devices of other consumers who make such content publicly available by means of the same or an interoperable product or service, where –

(1) such content is made publicly available among individuals whose actual identities [and electronic mail address] are unknown to one another; and

(2) such program is used in a manner in which there is no central operator of a central repository, index or [directory] who can remove or disable access to allegedly infringing content.

By this definition, the Web is clearly a peer-to-peer system. Arguably, the Internet itself may be a peer-to-peer system as well.

What's the Cybersecurity Czar's Job?

The sudden resignation of Amit Yoran, the Department of Homeland Security's "Cybersecurity Czar", reportedly due to frustration at being bureaucratically marginalized, has led to calls to upgrade the position from the third- or fourth-level administrator billet that Yoran held to a place of real authority in the government. If you're going to call someone a czar, you ought at least to give him some power.

But while we consider whether the position should be upgraded, we should also ask what the cybersecurity czar should be doing in the first place.

One uncontroversial aspect of the job is to oversee the security of the government's own computer systems. Doing this will require the ability to knock heads, because departments and offices won't want to change their practices and won't want to spend their budgets on hiring and retaining top quality system administrators. That's one good argument for upgrading the czar's position, perhaps affiliating it with a government-wide Chief Information Officer (CIO) function.

A harder question is what the government or its czar can do about private-sector insecurity. The bully pulpit is fine but it only goes so far. What, if anything, should the government actually do to improve private-sector security?

Braden Cox at Technology Liberation Front argues that almost any government action will do more harm than good.

In an article I wrote last year when Yoran was first appointed, I argued that the federal government has a role to play in cybersecurity, but that it should not be in the business of regulating private sector security. Mandated security audits, stringent liability rules, or minimum standards would not necessarily make software and networks more secure than would a more market-based approach, though it would surely help employ more security consultants and increase the bureaucracy and costs for industry.

Certainly, most of the things the government can do would be harmful. But I don't see the evidence that the market is solving this problem. Despite the announcements that Microsoft and others are spending more on security, I see little if any actual improvement in security.

There's also decent evidence of a market failure in cybersecurity. Suppose Alice buys her software from Max, and Max can provide different levels of security for different prices. If Alice's machine is compromised, she suffers some level of harm, which she will take into account in negotiating with Max. But a breakin to Alice's machine will turn that machine into a platform for attacking others. Alice has no incentive to address this harm to others, so she will buy less than a socially optimal level of security. This is not just a theoretical possibility – huge networks of compromised machines do exist and do sometimes cause serious trouble.
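The externality argument can be made concrete with a toy calculation. The numbers below are entirely hypothetical, chosen only to illustrate the shape of the problem: Alice picks the security level that minimizes her own cost, ignoring the harm her compromised machine would inflict on third parties, so her choice diverges from the socially optimal one.

```python
# Toy numbers (entirely hypothetical) illustrating the externality argument.
# Alice minimizes her private cost; society would also count harm to others.

price = {"basic": 5, "standard": 40, "hardened": 90}   # what Alice pays Max
breach_prob = {"basic": 0.50, "standard": 0.20, "hardened": 0.05}
HARM_TO_ALICE = 100    # Alice's own loss if her machine is compromised
HARM_TO_OTHERS = 400   # damage done when her machine attacks others

def private_cost(level):
    # The only costs Alice weighs in her negotiation with Max.
    return price[level] + breach_prob[level] * HARM_TO_ALICE

def social_cost(level):
    # Adds the expected harm to third parties that Alice ignores.
    return private_cost(level) + breach_prob[level] * HARM_TO_OTHERS

alice_choice = min(price, key=private_cost)     # "basic"
social_choice = min(price, key=social_cost)     # "hardened"
print(alice_choice, social_choice)
```

With these numbers Alice rationally buys the cheapest option, while the socially optimal choice is the most secure one; the gap is exactly the expected harm to others that never enters her decision.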

Of course, the existence of a problem does not automatically imply that government action is required. Is there anything productive the government can do to address this market failure?

I can see two possibilities. The first approach is for the government to use its market power, as a buyer of technology, to try to nudge the market in the right direction. Essentially, the government would pay for compromise-resistance, beyond its market incentive to do so, in order to bolster the market for more compromise-resistant software. For example, it might, in deciding what to buy, try to take into account the full social cost of potential breakins to its computers. Exactly how to make this happen, within a budget-conscious bureaucracy, is a challenge that I can't hope to address here.

The second approach government might take is to impose some form of liability, on somebody, for the types of security breaches associated with this market failure. Liability could be placed on the user (Alice, in our example above) or on the technology vendor. There has been lots of talk about the possibility of liability rules, but no clear picture has emerged. I haven't studied the issue enough to have a reliable opinion on whether liability changes are a good idea, but I do know that the idea should not be dismissed out of hand.

What's clear, I think, is that none of these possibilities require a "czar" position of the sort that Yoran held. Steps to improve cybersecurity inside the government need muscle from a CIO type. Changes to liability rules should be studied, but if they are adopted they won't require government staff to administer them. We don't need a czar to oversee the private sector.

Sin in Haste, Repent at Leisure

Ernest Miller, continuing his stellar coverage of the Induce Act, reports that, according to PublicKnowledge:

An all-star game of private sector legislative drafters will start at 10:30 [today]. There will be representatives from consumer electronics, Verizon, CDT, and others on our team and from the usual suspects on the other team. They are supposed to produce a draft by 4 p.m. That draft will then be, probably revised, to see if it can be marked up next week.

Yes, you read that right: critically important decisions about our national innovation policy need to be made, and a small group has been given a few hours to make them.

The result of this process will be yet another Induce Act draft. Doubtless it will take the same approach – blanket bans on broad classes of behavior, with narrow carveouts to protect the present business plans of the groups in the room – as the previous bad drafts.

How bad have these drafts been? Well, as far as I can tell, the now-current draft would appear to ban the manufacture and sale of photocopy machines by companies like Xerox.

Xerox induces infringement because, when it makes and sells photocopiers, it "engage[s] in conscious and deliberate affirmative acts that a reasonable person would expect to result in widespread [copyright infringement] taking into account the totality of circumstances." After all, everybody knows that photocopiers are sometimes used to infringe, so that widespread distribution of copiers will lead to widespread infringement.

Now we come to the issue of the narrow carveouts. The Induce Act draft does have two subsections that provide carveouts, which appear to be constructed to protect iPods. But those carveouts appear not to protect Xerox. Subsection (C) of the draft exempts some product distributors, but only if the infringements that are induced are entirely private, non-commercial, and done by consumers. This would appear not to protect Xerox, which has many commercial customers. Subsection (D) exempts Xerox's user manuals and advertising, but not the distribution of its copiers, so that doesn't help either. It looks like Xerox would be liable as an inducer under the current draft.

Am I missing something here? Perhaps a reader who is a lawyer can straighten me out. Regardless, this kind of analysis shows the risk induced by the "broad ban; narrow carveouts" approach to tech regulation – the risk that some legitimate business activity will fall outside the carveouts.

This problem is at its worst when regulatory language is written in a hurry, and when only a few stakeholders are invited to participate in drafting it. But that's exactly what is scheduled to be happening, right now, in a conference room in Washington.
