Ed Felten's blog

Used Hard Disks Packed with Confidential Information

Simson Garfinkel has an eye-opening piece in CSO magazine about the contents of used hard drives. Simson bought a pile of used hard drives and systematically examined them to see what could be recovered from them.

I took the drives home and started my own forensic analysis. Several of the drives had source code from high-tech companies. One drive had a confidential memorandum describing a biotech project; another had internal spreadsheets belonging to an international shipping company.

Since then, I have repeatedly indulged my habit for procuring and then analyzing secondhand hard drives. I bought recycled drives in Bellevue, Wash., that had internal Microsoft e-mail (somebody who was working from home, apparently). Drives that I found at an MIT swap meet had financial information on them from a Boston-area investment firm.

...

One of the drives once lived in an ATM. It contained a year's worth of financial transactions.

Lawyers, Lawyers Everywhere

Frank Field points to an upcoming symposium at Seton Hall on "Peer to Peer at the Crossroads: New Developments and New Directions for the Law and Business of Peer-to-Peer Networking". Here's a summary from the symposium announcement:

This Symposium will review recent developments in the law and business of peer-to-peer networks, with a view to determining where the law is going and where it should go. We will examine both the theoretical and practical implications of recent decisions and legislative initiatives, and will offer different perspectives on where the intersection between P2P technology and the law should lie. Our panelists include scholars and practitioners as well as representatives from the U.S. Copyright Office.

This sounded pretty good. But reading the announcement more carefully, I noticed something odd: the speakers are all lawyers. If you're having a conference whose scope includes business and technology, it seems reasonable to have at least some representation from the technology or business communities. Maybe on the panel about "Business Models, Technology, and Trends"?

Now I have nothing against lawyers. Some lawyers really understand technology. A few even understand it deeply. But if I were running a conference on law and technology, and I invited only technologists to speak, this would be seen, rightly, as a big problem. It wouldn't be much of an excuse for me to say that those technologists know a lot about the law. If I'm inviting ten speakers for a conference on technology and the law, surely I have one slot for somebody whose primary expertise is in the law.

Yet the same argument, running in the other direction, seems not to apply sometimes. Why not?

Security Attacks on Security Software

A new computer worm infects PCs by attacking security software, according to a Brian Krebs story in Saturday's Washington Post. The worm exploits flaws in two personal firewall products, BlackICE and RealSecure. Just to be clear: the firewalls' flaw is not that they fail to stop the worm, but that they actively create a hole that the worm exploits. People who didn't buy these firewalls are safe from the worm.

This has to be really embarrassing for the vendor, ISS. The last thing a security product should do is to create more vulnerabilities.

This problem is not unique. Just last week, a vulnerability was reported in another security product, Norton Internet Security.

Consumers are still better off, on balance, using PC security products. On the whole, these products close more holes than they open. But this is a useful reminder that all network software carries risks. Careful software engineering is needed everywhere, and especially for security products.

Gleick on the Naming Conundrum

James Gleick has an interesting piece in tomorrow's New York Times Magazine, on the problems associated with naming online. If you're already immersed in the ICANN/DNS/UDRP acronym complex, you won't learn much; but if you're not a naming wonk, you'll find the piece a very nice introduction to the naming wars.

New Survey of Spam Trends

The Pew Internet & American Life Project has released results of a new survey of experiences with email spam.

The report's headline is "The CAN-SPAM Act Has Not Helped Most Email Users So Far", and the press articles I have seen so far follow that interpretation. But the interpretation isn't actually supported by the data. Taken at face value, the data show that the amount of spam has not changed since January 1, when the CAN-SPAM Act took effect.

If true, this is actually good news, since the amount of spam had been increasing previously; for example, according to Brightmail, spam had grown from 7% of all email in April 2001, to 50% in September 2003. If the CAN-SPAM Act put the brakes on that increase, it has been very effective indeed.

Of course, the survey demonstrates only correlation, not causality. The level of spam may be steady, but there is nothing in the survey to suggest that CAN-SPAM is the reason.

An alternative explanation is hiding in the survey results: fewer people may be buying spammers' products. Five percent of users reported having bought a product or service advertised in spam. That's down from seven percent in June 2003. Nine percent reported having responded to a spam and later discovered it was phony or fraudulent; that's down from twelve percent in June 2003.

And note that the survey asked whether the respondent had ever responded to a spam, so the decrease in recent response rates would be much more dramatic. To understand why, imagine a group of 200 people who responded to the latest survey. Suppose that 100 of them are Recent Adopters, having started using the Internet since June 2003, and that the other 100 are Longtime Users who went online before June 2003. According to the previous survey, seven of the Longtime Users (i.e., 7%) bought from a spammer before June 2003; and according to the latest survey, only ten of our overall group of 200 users (i.e., 5%) have ever bought from a spammer. It follows that only three new buyers have appeared since June 2003, out of the 193 users who had not already bought from a spammer, so spammers are finding many fewer new buyers than before.
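For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation written out as a short script. The 200-person breakdown is the hypothetical from the paragraph above, not data from the Pew survey itself.

```python
# Back-of-the-envelope version of the hypothetical above.
# The group sizes are assumptions made purely for illustration.

longtime_users = 100      # online before June 2003
recent_adopters = 100     # online since June 2003
total_users = longtime_users + recent_adopters

# Previous survey: 7% of longtime users had ever bought from a spammer.
bought_before_june_2003 = round(0.07 * longtime_users)   # 7 people

# Latest survey: 5% of all users have ever bought from a spammer.
ever_bought = round(0.05 * total_users)                   # 10 people

# New buyers since June 2003 = everyone who has ever bought,
# minus those who had already bought before June 2003.
new_buyers = ever_bought - bought_before_june_2003        # 3 people

# Pool of users who could have become new buyers: everyone who had
# not already bought before June 2003.
eligible = total_users - bought_before_june_2003          # 193 people

print(f"New buyers since June 2003: {new_buyers} of {eligible} "
      f"({100 * new_buyers / eligible:.1f}%)")
```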

A caveat is in order here. The survey's margin of error is three percent, so we can't be certain there's a real trend here. But still, it's much more likely than not that the number of responders really has decreased.

ATM Crashes to Windows Desktop

Yesterday, an ATM in Baker Hall at Carnegie Mellon University crashed, or had some kind of software error, and ended up displaying the Windows XP desktop. Some students started Windows Media Player on it, playing a song that comes preinstalled on Windows XP machines. Students took photos and movies of this.

There's no way to tell whether the students, starting with the Windows desktop, would have been able to eject the ATM's stock of cash. As my colleague Andrew Appel observes, it's possible to design an ATM in a way that prevents it from dispensing cash without the knowledge and participation of a computer back at the bank. For example, the cash dispensing hardware could require some cryptographic message from the bank's computer before doing anything. Then again, it's possible to design a Windows-based ATM that never (or almost never) displays the Windows desktop, failing instead into a "technical difficulties – please call customer service" screen, and the designers apparently didn't adopt that precaution.
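To make that design idea concrete, here is a minimal sketch of how a dispense command might be gated on a message authenticated by the bank's back-end computer. The use of HMAC, the message format, and the function names are my own illustrative assumptions, not a description of how any real ATM network works.

```python
# Hypothetical sketch: the dispenser refuses to act unless the request
# carries a MAC computed by the bank's back-end computer. HMAC-SHA-256,
# the message format, and the key handling are illustrative assumptions.
import hmac
import hashlib

BANK_KEY = b"shared-secret-provisioned-at-install"  # assumption: pre-shared key

def bank_authorize(transaction_id: str, amount_cents: int) -> bytes:
    """Runs on the bank's computer: authorize a specific dispense request."""
    message = f"{transaction_id}:{amount_cents}".encode()
    return hmac.new(BANK_KEY, message, hashlib.sha256).digest()

def dispenser_check(transaction_id: str, amount_cents: int, tag: bytes) -> bool:
    """Runs in the cash-dispensing hardware: dispense only if the tag verifies."""
    message = f"{transaction_id}:{amount_cents}".encode()
    expected = hmac.new(BANK_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Code running on a compromised Windows front end cannot produce a valid
# tag without the key, so it cannot make the dispenser pay out.
tag = bank_authorize("txn-0001", 20_000)          # $200.00
assert dispenser_check("txn-0001", 20_000, tag)
assert not dispenser_check("txn-0001", 999_999, tag)
```

A real design would also have to guard against replay of old authorizations (with a counter or nonce, say), but the sketch captures the basic point: software on the ATM's Windows front end can't make the money move on its own.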

A single, isolated failure like this isn't, in itself, a big deal. Every ATM transaction is recorded and audited. Banks have the power to adopt loss-prevention technology; they have good historical data on error rates and losses; and they absorb the cost of both losses and loss-prevention technology. So it seems safe to assume that they are managing these kinds of risks rationally.

Good News: Election Error Found in California

From Kim Zetter at wired.com comes the story of the recent Napa County, California election. Napa County uses paper ballots that are marked by the voter with a pen or pencil, and counted by an optical scanner.

Due to a miscalibrated scanner, some valid votes went uncounted, as the scanner failed to detect the markings on some ballots. The problem was discovered during a random recount of one percent of precincts. The ballots are now being recounted with properly calibrated scanners, and the recount might well affect the election’s result.

Although a mistake was made in configuring the one scanner, the good news is that the system was robust enough to catch the problem. The main source of this robustness lies in the paper record, which could be manually examined to determine whether there was a problem, and could be recounted later when a problem was found. Another important factor was the random one percent recount, which brought the problem to light.

Our biggest fear in designing election technology should not be that we’ll make a mistake, but that we’ll make a mistake and fail to notice it. Paper records and random recounts help us notice mistakes and recover from them. Paperless e-voting systems don’t.

Did I mention that the Holt e-voting bill, H.R. 2239, requires paper trails and random recounts?

[Link via Peter Neumann's RISKS Forum.]

Solum's Response on .mobile

Larry Solum, at Legal Theory Blog, responds to my .mobile post from yesterday. He also points to a recently published paper he co-authored with Karl Mannheim. The paper looks really interesting.

Solum's argument is essentially that creating .mobile would be an experiment, and that the experiment won't hurt anybody. If nobody adopts .mobile, the experiment will have no effect at all. And if some people like .mobile and some don't, those who like it will benefit and the others won't be harmed. So why not try the experiment? (Karl-Friedrich Lenz made a similar comment.)

The Mannheim/Solum paper argues that ICANN should let a thousand gTLDs bloom, and should use auctions to allocate the new gTLDs. (gTLDs are Generic Top-Level Domains such as .com, .org, or .union) The paper argues persuasively for this policy.

If ICANN were following the Mannheim/Solum policy, or some approximation to it, I would agree with Solum's argument and would be happy to see the .mobile experiment proceed. (But I would still bet on its failure.) No evidence for its viability would be needed, beyond the sponsors' willingness to outbid others for the rights to that gTLD.

But today's ICANN policy is to authorize very few gTLDs, and to allocate them administratively. In the context of today's policy, and knowing that the creation of one new gTLD will be used to argue against the creation of others, I think a strong case needs to be made for any new gTLD. The proponents of .mobile have not made such a case. Certainly, they have not offered a convincing argument that theirs is the best way to allocate a new gTLD, or even that theirs is the best way to allocate the name .mobile.

Why We Don't Need .mobile

A group of companies is proposing the creation of a new Internet top level domain called ".mobile", with rules that require sites in .mobile to be optimized for viewing on small-display devices like mobile phones.

This seems like a bad idea. A better approach is to let website authors create mobile-specific versions of their sites, but serve out those versions from ordinary .com addresses. A mobile version of weather.com, for example, would be served out from the weather.com address. The protocol used to fetch webpages, HTTP, already tells the server what kind of client is asking for the page (via the User-Agent request header), so the server can easily send different versions of a page to different devices. This lets every site have a single URL, rather than having to promote separate URLs for separate purposes; and it lets any page link to any other page with a single hyperlink, rather than an awkward "click here on mobile phones, or here on other devices" construction.
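As a rough sketch of this kind of server-side switching, the toy server below picks a page based on the User-Agent request header. The detection heuristic, the page contents, and the port are assumptions made up for illustration; this is not how weather.com actually does it.

```python
# Minimal sketch: one URL, two renderings, chosen by looking at the
# User-Agent request header. The detection rule and page bodies are
# purely illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

MOBILE_HINTS = ("mobile", "phone", "wap")  # assumption: crude heuristic

DESKTOP_PAGE = b"<html><body><h1>Full site</h1><p>Maps, radar, forecasts...</p></body></html>"
MOBILE_PAGE = b"<html><body><p>Today: 72F, sunny</p></body></html>"

class WeatherHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "").lower()
        is_mobile = any(hint in user_agent for hint in MOBILE_HINTS)
        body = MOBILE_PAGE if is_mobile else DESKTOP_PAGE

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        # Tell caches that the response varies with the requesting client.
        self.send_header("Vary", "User-Agent")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), WeatherHandler).serve_forever()
```

Either way, every client sees the same URL, which is the whole point.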

The .mobile proposal looks like a textbook example of Lessig's point about how changing the architecture of the net can increase its regulability. .mobile would be a regulated space, in the sense that somebody would make rules controlling how sites in .mobile work. And this, I suspect, is the real purpose of .mobile – to give one group control over how mobile web technology develops. We're better off without that control, letting the technology develop on its own over in the less regulated .com.

We already have a regulated subdomain, .kids.us, and that hasn't worked out too well. Sites in .kids.us have to obey certain rules to keep them child-safe; but hardly any sites have joined .kids.us. Instead, child-safe sites have developed in .com and .org, and parents who want to limit what their kids see on the net just limit their kids to those sites.

If implemented, .mobile will probably suffer the same fate. Sites will choose not to pay extra for the privilege of being regulated. Instead, they'll stay in .com and focus on improving their product.

An Inexhaustible Supply of Bugs

Eric Rescorla recently released an interesting paper analyzing data on the discovery of security bugs in popular products. I have some minor quibbles with the paper’s main argument (and I may write more about that later) but the data analysis alone makes the paper worth reading. Briefly, what Eric did is to take data about reported security vulnerabilities, and fit it to a standard model of software reliability. This allowed him to estimate the number of security bugs in popular software products and the rate at which those bugs will be found in the future.
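To give a feel for what fitting a reliability model looks like here, the sketch below fits a simple exponential reliability-growth model, in which the expected number of bugs found by time t is N(1 - e^(-bt)), to a cumulative count of discovered vulnerabilities. The model choice and the data points are my own illustrative assumptions, not Eric's dataset or his exact method.

```python
# Illustrative only: fit an exponential reliability-growth model
#   m(t) = N * (1 - exp(-b * t))
# to cumulative vulnerability counts. N estimates the total number of
# bugs in the shipped version; b is the discovery rate. The data points
# below are made up for the sake of the example.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_bugs(t, N, b):
    return N * (1.0 - np.exp(-b * t))

months = np.arange(1, 25)                          # two years after release
found = np.array([3, 5, 8, 10, 13, 15, 16, 18,     # hypothetical cumulative
                  20, 21, 23, 24, 26, 27, 28, 29,  # counts of reported bugs
                  31, 32, 33, 34, 35, 36, 37, 38])

(N_hat, b_hat), _ = curve_fit(cumulative_bugs, months, found, p0=(50, 0.1))
print(f"Estimated total bugs N = {N_hat:.0f}, discovery rate b = {b_hat:.3f}/month")

# A very small b relative to the observation window is the "no depletion"
# case: the pool of undiscovered bugs is barely being drawn down.
```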

When a product version is shipped, it contains a certain number of security bugs. Over time, some of these bugs are found and fixed. One hopes that the supply of bugs is depleted over time, so that it gets harder (for both the good guys and the bad guys) to find new bugs.

The first conclusion from Eric’s analysis is that there are many, many security bugs. This confirms the expectations of many security experts. My own rule of thumb is that typical release-quality industrial code has about one serious security bug per 3,000 lines of code. A product with tens of millions of lines of code will naturally have thousands of security bugs.
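A quick back-of-the-envelope check of that rule of thumb (the product size below is an assumption, not a measurement):

```python
# Rule-of-thumb arithmetic from the paragraph above; the code size is a
# made-up example of a product with "tens of millions" of lines.
lines_of_code = 10_000_000
lines_per_serious_bug = 3_000                    # one serious security bug per ~3,000 lines
print(lines_of_code // lines_per_serious_bug)    # -> 3333 serious security bugs
```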

The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.

If true, this conclusion has profound implications for how we think about software security. It implies that once a version of a software product is shipped, there is nothing anybody can do to improve its security. Sure, we can (and should) apply software patches, but patching is just a treadmill and not a road to better security. No matter how many bugs we fix, the bad guys will find it just as easy to uncover new ones.
