Ed Felten's blog

Conservative Group Takes Conservative Position on Induce Act

The American Conservative Union, an influential right-wing group, has announced its opposition to the Induce Act, and is running ads criticizing those Republicans who support the Act. This should not be surprising, for opposition to the Act is a natural position for true conservatives, who oppose government regulation of technology products and support a competitive marketplace for technology and entertainment.

One sometimes hears the claim that conservatives should support the Induce Act, because that's what big business wants. But thoughtful conservatives support free markets, not giveaways to specific business sectors. And conservatives who understand the economy know that the Induce Act is supported by a few businesses, but opposed by many more, and that the opponents – the computer, electronics, Internet, and software industries – account for a larger and more dynamic portion of the economy than the supporters do.

The Induce Act is a nice litmus test for self-described conservative lawmakers. They can support the Act, and confirm the criticism that conservatism is just a fig-leaf for corporate welfare. Or they can oppose the Act and confirm their own claims to stand for competition and the free market.

The ACU sees this choice for what it is, and opposes the Induce Act. Let's hope that more conservatives join them.

The Least Objectionable Content Labeling System

Today I'll wrap up Vice Week here at Freedom to Tinker with an entry on porn labeling. On Monday I agreed with the conventional wisdom that online porn regulation is a mess. On Tuesday I wrote about what my wife and I do in our home to control underage access to inappropriate material. Today, I'll suggest a public approach to online porn that might possibly do a little bit of good. And as Seth Finkelstein (a.k.a. Eeyore, a.k.a. The Voice of Experience) would probably say, a little bit of good is the best one can hope for on this issue. My approach is similar to one that Larry Lessig sketched in a recent piece in Wired.

My proposal is to implement a voluntary labeling scheme for Web content. It's voluntary, because we can't force overseas sites to comply, so we might as well just ask people politely to participate. Labeling schemes tend not to be adopted if the labels are complicated, or if the scheme requires all sites to be labeled. So I'll propose the simplest possible labels, in a scheme where the vast majority of sites need no labels at all.

The idea is to create a label, which I'll call "adultsonly" (Lessig calls it "porn" but I think that's imprecise). Putting the adultsonly tag on a page indicates that the publisher requests that the page be shown only to adults. And that's all it means. There's no official rule about when material should be labeled, and no spectrum of labels. It's just the publisher's judgment as to whether the material should be shown to kids. You could label an entire page by adding to it an adultsonly meta-tag; or you could label a portion of a page by surrounding it with "adultsonly" and "/adultsonly" tags. This would be easy to implement, and it would be backward compatible, since browsers ignore tags that they don't understand. Browsers could include a kids-mode that would hide all adultsonly material.
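To illustrate, here is a rough sketch of the filtering a browser's kids-mode might do: hide the whole page if it carries an adultsonly meta-tag, and otherwise strip any adultsonly-labeled portions. The exact tag syntax is my own invention for this example, not part of any standard.

```python
import re

# Hypothetical markup for this sketch: a page-level meta-tag,
# or inline <adultsonly>...</adultsonly> spans.
ADULTS_ONLY_META = re.compile(r'<meta\s+name=["\']adultsonly["\']', re.IGNORECASE)
ADULTS_ONLY_SPAN = re.compile(r'<adultsonly>.*?</adultsonly>',
                              re.IGNORECASE | re.DOTALL)

def kids_mode_filter(html: str) -> str:
    """Render a page as a kids-mode browser might."""
    if ADULTS_ONLY_META.search(html):
        return ""  # entire page is labeled: show nothing
    return ADULTS_ONLY_SPAN.sub("", html)  # hide labeled portions only

page = 'Family recipes. <adultsonly>Racy material.</adultsonly> More recipes.'
print(kids_mode_filter(page))
```

Note that unlabeled pages pass through untouched, which is the point of the scheme: the vast majority of sites need no labels at all.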

But where, you ask, is the incentive for web site publishers to label their racy material as adultsonly? The answer is that we create that incentive by decreeing that although material published on the open Internet is normally deemed as having been made available to kids, any material labeled as adultsonly will be deemed as having been made available only to adults. So by labeling its content, a publisher can ensure that the content's First Amendment status is determined by the standard obscenity-for-adults test, rather than the less permissive obscenity-for-kids test. (I'm assuming that such tests will exist and their nature will be determined by immovable politico-legal forces.)

This is a labeling scheme that even a strict libertarian might be able to love. It's simple and strictly voluntary, and it doesn't put the government in the business of establishing fancy taxonomies of harmful content (beyond the basic test for obscenity, which is in practice unchangeable anyway). It's more permissive of speech than the current system, at least if that speech is labeled. This is, I think, the least objectionable content labeling system possible.


Bots Play Backgammon Too

Responding to my entry yesterday about pokerbots, Jordan Lampe emails a report from the world of backgammon. Backgammon bots play at least as well as the best human players, and backgammon is often played for money, so the temptation to use bots in online play is definitely there.

Most people seem to be wary of this practice, and the following countermeasures have been developed (not necessarily exclusive, and not all used by the same person):

1) Don't play for money; only play for fun.

2) Play for money only against people you know well.

3) Be suspicious of opponents who take a long time after every move; they may be plugging their moves into computers.

4) At the end of the game, you can analyze your game with one of the computer programs. It turns out that all the computers rate each other's play very highly, with an error rate of 0-1.5 "millipoints" per move. If you get a rate of exactly 0, you can be dead certain your opponent is using the same computer program. Computers rate the best humans in the world in the 3-4 range. In any case, if your opponent is using a computer program to decide all his moves, it is fairly easy to tell after only a few games, and then avoid playing with that player any more.

5) Some players take the attitude "if I lose, at least I'll have learned something" and therefore don't care whether they are playing bots.

6) Using a bot to help you win is, well, boring, and so it doesn't happen that much anyway.
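The error-rate heuristic in item 4 can be sketched in a few lines of code. This is just an illustration of the thresholds quoted above, not a real analysis engine; the category labels are mine.

```python
def classify_player(errors_per_move):
    """Rough bot-detection heuristic using the quoted error rates:
    bots score 0-1.5 "millipoints" of error per move, the best humans
    score around 3-4, and an exact 0 against our own analysis engine
    suggests the opponent is running the same program."""
    avg = sum(errors_per_move) / len(errors_per_move)
    if avg == 0:
        return "same engine as the analyzer"
    if avg <= 1.5:
        return "probably a bot"
    if avg <= 4:
        return "world-class human"
    return "ordinary human"

print(classify_player([0.4, 1.1, 0.7]))  # low average error rate
```

The interesting property is that detection doesn't require catching the bot in the act: play that is consistently too accurate is itself the evidence.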

Having played a lot of poker and backgammon in my day, I suspect that distinguishing human play from computer play would be harder in poker than it is in backgammon. For one thing, in backgammon you always know what information your opponent had in choosing a certain move (both players have the same information at all times); but in poker you may never know what your opponent knew or believed at a particular point in time. Also, a good poker player is always trying to frustrate opponents' attempts to build mental models of his decision processes; this type of misdirection, which a good bot will emulate by using randomized algorithms, will make it harder to distinguish similar styles of play.

Jordan identifies another factor that several poker players mentioned as well: the fact that most gambling income is made by separating weak players from their money. As long as there are enough "fish", all of the sharks, whether human or not, will feast. When the stakes get high, the fish will be driven out; but at low stakes, good human players may still make money.


Online Poker and Unenforceable Rules

Computerized "bots" may be common in online poker games, according to a Mike Brunker story at MSNBC.com. I have my doubts about the prevalence today of skillful, fully automated pokerbots, but there is an interesting story here nonetheless.

Most online casinos ban bots, but there is really no way to enforce such a rule. Already, many online players use electronic assistants that help them calculate odds, something that world-class players are adept at doing in their heads. Pokerbot technology will only advance, so that even if bots don't outplay people now, they will eventually. (The claim, sometimes heard, that computers cannot understand bluffing in poker is incorrect. Game theory can predict and explain bluffing behavior. A good pokerbot will bluff sometimes.)
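Since the bluffing point is sometimes doubted, here is a toy sketch of how a bot might randomize its bluffs. The hand-strength scale and the 15% bluff frequency are purely illustrative, not values from any real game-theoretic solver.

```python
import random

def choose_action(hand_strength, bluff_rate=0.15, rng=random):
    """Toy mixed strategy: bet strong hands for value, and bet a random
    fraction of weak hands as bluffs, so that an opponent cannot infer
    hand strength from the action alone."""
    if hand_strength >= 0.7:
        return "bet"            # value bet
    if hand_strength <= 0.3 and rng.random() < bluff_rate:
        return "bet"            # randomized bluff
    return "check"
```

Because the bluffs are chosen by coin flip rather than by any pattern, even a bot's own designer cannot predict which weak hands it will bet, which is exactly what game theory prescribes.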

Once bots are better than people, it's hard to see why a rational person, with real money at stake, would fail to use a bot. Sure, watching your bot play is less fun than playing yourself; but losing to a bunch of bots isn't much fun either. Old-fashioned human vs. human play will still be seen in very-low-stakes online games, where it's not worth the trouble of deploying a bot, and in in-person games where the non-botness of players can be checked.

The online casinos are kidding themselves if they think they can enforce a no-bots rule. How can they tell what a player is doing in the privacy of his own home? Even if they can tell that a human's hands are on the keyboard, how can they tell whether that human is getting advice from a bot?

The article discusses yet another unenforceable rule of online poker: the ban on collusion between players. If two or more players simply show each other their cards, they gain an advantage over the others at the table. There's no way for an online casino to prevent players from conducting back-channel communications, so a ban on collusion is impossible to enforce.

By reiterating their anti-bot and anti-collusion rules, and by claiming to have mysterious enforcement mechanisms, online casinos may be able to stem the tide of cheating for a while. But eventually, bots and collusion will become the norm, and lone human players will be driven out of all but the lowest stakes games.

But there is another strategy. An online casino could encourage bots, and even set up bots-only games. The game would then become not a human vs. human card game but a human vs. human battle between bot designers for geekly mastery. I'll bet there are plenty of programmers out there who would like to give it a try.


Voluntary Filtering Works for Us

It's day two of porn week here at Freedom to Tinker, and time to talk about the tools parents have to limit what their kids see. As a parent, I have not only an opinion, but also an actual household policy (set jointly with my wife, of course) on this topic.

Like most parents, we want to limit what our kid sees. The reason is not so much that there are things we want our kid never to see, but more that we don't think our kid is ready, yet, to see and hear absolutely everything in the world. Even the Cookie Monster is scary to kids at a certain age. Good parents know what their kids can handle alone, and what their kids can handle with a trusted adult present. We want to expose our kid to certain things gradually. Some things should be seen for the first time with a parent present to talk about what is being depicted.

But how can we do this, in the real world? It's not enough simply to say that we should supervise our kid. To watch a kid nonstop, 24/7, is not only impractical but creepy. We don't want to turn our home into a surveillance state.

Instead, we rely on architecture. For example, we put the only kid-accessible computer and TV in the busiest room of the house so that we're less likely to lose track of what's happening. But even that isn't foolproof – it doesn't work in the early morning hours when kids tend to be up while parents sleep.

This is where filtering technology can help. We find the TV rating and filtering system quite useful, despite its obvious flaws. This system is often called the V-chip, but we don't actually rely on the V-chip itself. Instead, we rely on our TiVo to restrict access to shows with certain ratings, unless a secret password has been entered. We know that the technology overblocks and underblocks. But overall, we prefer a policy of "watch any kid-rated show you want, but ask a parent if you want to watch anything else" to the alternatives of "watch anything you want" or "always ask a parent first". (A welcome side-effect: by changing the rating threshold we can easily implement a "no TV today" policy.)

It's worth noting that we don't use the federally mandated V-chip, which is built into our TV. We simply use the ratings associated with shows, and the parental controls that TiVo included voluntarily in its product. For us, the federal V-chip regulation provided, at most, the benefit of speeding standardization of the rating system. We're happy with a semi-accurate, voluntary system that saves us time but doesn't try to override our own judgment.


Online Porn Issue Not Going Away

Adam Thierer at Technology Liberation Front offers a long and interesting discussion of the online porn wars, in the form of a review of two articles by Jeffrey Rosen and Larry Lessig. I've been meaning to write about online porn regulation for a while, and Thierer's post seems like a good excuse to address that topic now.

Recent years have seen a series of laws aimed at restricting minors' access to porn, such as the Communications Decency Act (CDA) and the Child Online Protection Act (COPA), which have been the subject of several important court decisions. These cases have driven a blip in interest, and commentary, on online porn regulation.

The argument of Rosen's article is captured in its title: "The End of Obscenity." Rosen argues that it's only a matter of time before the very notion of obscenity – a word which here means "porn too icky to receive First Amendment protection" – is abandoned. Rosen makes a two-part argument for this proposition. First, he argues that the Miller test – the obscenity-detection rule decreed by the Supreme Court in the 1970s – is no longer tenable. Second, he argues that porn is becoming socially acceptable. Neither claim is as strong as Rosen suggests.

The Miller test says that material is obscene if it meets all three of these criteria: (1) the average person, applying contemporary community standards, would find it is designed to appeal to the prurient interest; (2) it depicts [icky sexual stuff]; and (3) taken as a whole, it lacks serious literary, artistic, scientific, or political value.

Rosen argues that the "community standards" language, which was originally intended to account for differences in standards between, say, Las Vegas and Provo, no longer makes sense now that the Internet makes the porn market international. How is an online porn purveyor to know whether he is violating community standards somewhere? The result, Rosen argues, must be that the most censorious community in the U.S. will impose its standards on everybody else.

The implication of Rosen's argument is that, for the purposes of porn distribution, the whole Internet, or indeed the whole nation, is essentially a single community. Applying the standards of the national community would seem to solve this problem – and the rest of Rosen's essay supports the notion that national standards are converging anyway.

The other problem with the Miller standard is that it's hopelessly vague. This seems unavoidable with any standard that divides obscene from non-obscene material. As long as there is a legal and political consensus for drawing such a line, it will be drawn somewhere; so at best we might replace the Miller line with a slightly clearer one.

Which brings us to the second, and more provocative, part of Rosen's essay, in which he argues that community standards are shifting to make porn acceptable, so that the very notion of obscenity is becoming a dinosaur. There is something to this argument – the market for online porn does seem to be growing – but I think Rosen goes too far. It's one thing to say that Americans spend $10 billion annually on online porn, but it's another thing entirely to say that a consensus is developing that all porn should be legal. For one thing, I would guess that the vast majority of that $10 billion is spent on material that is allowed under the Miller test, and the use of already-legal material does not in itself indicate a consensus for legalizing more material.

But the biggest flaw in Rosen's argument is that the laws at issue in this debate, such as the CDA and COPA, are about restricting access to porn by children. And there's just no way that the porn-tolerant consensus that Rosen predicts will extend to giving kids uncontrolled access to porn.

It looks like we're stuck with more or less the current situation – limits on porn access by kids, implemented by ugly, messy law and/or technology – for the foreseeable future. What, if anything, can we do to mitigate this mess? I'll address that question, and the Lessig essay, later in the week.


Bike Lock Fiasco

Kryptonite may stymie Superman, but apparently it's not much of a barrier to bike thieves. Many press reports (e.g., Wired News, New York Times, Boston Globe) say that the supposedly super-strong Kryptonite bike locks can be opened by jamming the empty barrel of a Bic ballpoint pen into the lock and turning clockwise. Understandably, this news has spread like wildfire on the net, especially after someone posted a video of the Bic trick in action. A bike-store employee needed only five seconds to demonstrate the trick for the NYT reporter.

The Kryptonite company is now in a world of hurt. Not only is their reputation severely damaged, but they are on the hook for their anti-theft guarantee, which offers up to $3500 to anybody whose bike is stolen while protected by a Kryptonite lock. The company says it will offer an upgrade program for owners of the now-suspect locks.

As often happens in these sorts of stories, the triggering event was not the discovery of the Bic trick, which had apparently been known for some time among lock-picking geeks, but the diffusion of this knowledge to the general public. The likely tipping point was a mailing list message by Chris Brennan, who had his Kryptonite-protected bike stolen and shortly thereafter heard from a friend about the Bic trick.

I have no direct confirmation that people in the lock-picking community knew this before. All I have is the words of a talking head in the NYT article. [UPDATE (11 AM, Sept. 17): Chris at Mutatron points to a 1992 Usenet message describing a similar technique.] But if it is true that this information was known, then the folks at Kryptonite must have known about it too, which puts their decision to keep selling the locks, and promoting them as the safest thing around, in an even worse light, and quickens the pulses of product liability lawyers.

Whatever the facts turn out to be, this incident seems destined to be Exhibit 1 in the debate over disclosure of security flaws. So far, all we know for sure is that the market will punish Kryptonite for making security claims that turned out to be very wrong.

UPDATE (11:00 AM): The vulnerability here seems to apply to all locks that have the barrel-type lock and key used on most Kryptonite bike locks. It would also apply, for example, to the common Kensington-style laptop locks, and to the locks on some devices such as vending machines.


DRM and the Market

In light of yesterday's entry on DRM and competition, and the ensuing comment thread, it's interesting to look at last week's action by TiVo and ReplayTV to limit their customers' use of pay-per-view content that the customers have recorded.

If customers buy access to pay-per-view content, and record that content on their TiVo or ReplayTV video recorders, the recorders will now limit playback of that content to a short time period after the recording is made. It's not clear how the recorders will recognize affected programs, but it seems likely that some kind of signal will be embedded in the programs themselves. If so, this looks a lot like a kind of broadcast-flag technology, applied, ironically, only to programs that consumers have already paid a special fee to receive.

It seems unlikely that TiVo and ReplayTV each decided independently to adopt this technology at more or less the same time. Perhaps there was some kind of agreement between the two companies to take this action together. This kind of agreement, between two companies that together hold most of the personal-video-recorder market, to reduce product functionality in a way that either company, acting alone, would have a competitive disincentive to adopt, seems to raise antitrust issues.

Even so, these are not the only two entries in the market. MythTV, the open-source software replacement, is unlikely to make the same change; so this development will only make MythTV look better to consumers. Perhaps the market will push back, by giving more business to MythTV. True, MythTV is now too hard for ordinary consumers to use. But if MythTV is as good as people say, it's only a matter of time before somebody packages up a "MythTV system in a box" product that anybody can buy and use.


Self-Help for Consumers

Braden Cox at Technology Liberation Front writes about a law school symposium on "The Economics of Self-Help and Self-Defense in Cyberspace". Near the end of an interesting discussion, Cox says this:

The conference ended with Dan Burk at Univ of Minnesota Law School giving a lefty analysis for how DRM will be mostly bad for consumers unless the government steps in and sets limits that preserve fair use. I had to challenge him on this one, and asked where is the market failure here? Consumers will get what they demand, and if some DRM is overly restrictive there will be companies that will provide more to consumers. He said that the consumers of DRM technology are not the general public, but the recording companies, and because society-at-large is not properly represented in this debate the government needs to play a larger role.

I would answer Cox's question a bit differently. I'm happy to agree with Cox that the market, left to itself, would find a reasonable balance between the desires of media publishers and consumers. But the market hasn't been left to itself. Congress passed the DMCA, which bans some products that let consumers exercise their rights to make noninfringing use (including fair use) of works.

The best solution would be to repeal the DMCA, or at least to create a real exemption for technologies that enable fair use and other lawful uses. If that's not possible, and Congress continues to insist on decreeing which media player technologies can exist, the second-best solution is to make those decrees more wisely.

Because of the DMCA, consumers have not gotten what they demand. For example, many consumers demand a DVD player that runs on Linux, but when somebody tried to build one it was deemed illegal.

Perhaps the Technology Liberation Front can help us liberate these technologies.


Security by Obscurity

Adam Shostack points to a new paper by Peter Swire, entitled "A Model for When Disclosure Helps Security". How, Swire asks, can we reconcile the pro-disclosure "no security by obscurity" stance of crypto weenies with the pro-secrecy, "loose lips sink ships" attitude of the military? Surely both communities understand their own problems; yet they come to different conclusions about the value of secrecy.

Swire argues that the answer lies in the differing characteristics of security problems. For example, when an attacker can cheaply probe a system to learn how it works, secrecy doesn't help much; but when probing is impossible, expensive, or pointless, secrecy makes more sense.

This is a worthwhile discussion, but I think it slightly misses the point of the "no security by obscurity" principle. The point is not to avoid secrecy altogether; that would almost never be feasible. Instead, the point is to be very careful about what kind of secrecy you rely on.

"Security by obscurity" is really just a pejorative term for systems that violate Kerckhoffs' Principle, which says that you should not rely on keeping an algorithm secret, but should only rely on keeping a numeric key secret. Keys make better secrets than algorithms do, for at least two reasons. First, it's easy to use different keys in different times and places, thereby localizing the effect of lost secrets; but it's hard to vary your algorithms. Second, if keys are generated randomly then we can quantify the effort required for an adversary to guess them; but we can't predict how hard it will be for an adversary to guess which algorithm we're using.
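The second point is worth making concrete. For a randomly generated k-bit key, the defender can compute exactly how much guessing work an attacker faces; a quick sketch:

```python
def expected_guesses(key_bits):
    """With a k-bit random key, an attacker trying keys in any fixed
    order needs 2**(k-1) guesses on average before hitting the right
    one. No comparable bound can be stated for guessing which secret
    algorithm a system uses."""
    return 2 ** (key_bits - 1)

print(f"128-bit key: about {expected_guesses(128):.2e} guesses on average")
```

This quantifiability is what makes key secrecy an engineering discipline rather than a gamble: the defender chooses the key length, and the attacker's expected workload follows directly.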

So cryptographers do believe in keeping secrets, but are very careful about which kinds of secrets they keep. True, the military's secrets sometimes violate Kerckhoffs' principle, but this is mainly because there is no alternative. After all, if you have to get a troopship safely across an ocean, you can't just encrypt the ship under a secret key and beam it across the water. Your only choice is to rely on keeping the algorithm (i.e., the ship's route) secret.

In the end, I think there's less difference between the methods of cryptographers and the military than some people would think. Cryptographers have more options, so they can be pickier about which secrets to keep; but the military has to deal with the options it has.
