All Posts

This page shows all posts from all authors. Selected posts also appear on the front page.

Do University Honor Codes Work?

Rick Garnett over at ProfsBlawg asked his readers about student honor codes and whether they work. His readers, who seem to be mostly lawyers and law students, chimed in with quite a few comments, most of them negative.

I have dealt with honor codes at two institutions. My undergraduate institution, Caltech, has a simply stated and all-encompassing honor code that is enforced entirely by the students. My sense was that it worked very well when I was there. (I assume it still does.) Caltech has a small (800 students) and relatively homogeneous student body, with a student culture that features less student versus student competitiveness than you might expect. Competition there tends to be student versus crushing workload. The honor code was part of the social contract among students, and everybody appreciated the benefits it provided. For example, you could take your final exams at the time and place of your choosing, even if they were closed-book and had a time limit; you were trusted to follow the rules.

Contrasting this to the reports of Garnett's readers, I can't help but wonder if honor codes are especially problematic in law schools. There is reportedly more cutthroat competition between law students, which could be more conducive to ethical corner-cutting. Competitiveness is an engine of our adversarial legal system, so it's not surprising to see law students so eager to win every point, though it is disappointing if they do so by cheating.

I've also seen Princeton's disciplinary system as a faculty member. Princeton has a student-run honor code system, but it applies only to in-class exams. I don't have any first-hand experience with this system, but I haven't heard many complaints. I like the system, since it saves me from the unpleasant and trust-destroying task of policing in-class exams. Instead, I just hand out the exams, then leave the room and wait nearby to answer questions.

Several years ago, I did a three-year term on Princeton's Student-Faculty Committee on Discipline, which deals with all serious disciplinary infractions, whether academic or non-academic, except those relating to in-class exams. This was hard work. We didn't hear a huge number of cases, but it took surprisingly long to adjudicate even seemingly simple cases. I thought this committee did its job very well.

One interesting aspect of this committee was that faculty and students worked side by side. I was curious to see whether there were differences between student and faculty attitudes toward the disciplinary process, but it turned out there were surprisingly few. If anything, the students were on average slightly more inclined to impose stronger penalties than the faculty, though the differences were small and opinions shifted from case to case. I don't think this reflected selection bias either; discussions with other students over the years have convinced me that students support serious and uniform punishment for violators. So I don't think there will be much difference in the outcomes of a student-run versus a faculty-run disciplinary process.

One lesson from Garnett's comments is that an honor code will die if students decide that enforcement is weak or biased. Here the secrecy of disciplinary processes, which is of course necessary to protect the accused, can be harmful. Rumors do circulate. Sometimes they're inaccurate but can't be corrected without breaching secrecy. For example, when I was on Princeton's discipline committee, some students believed that star athletes or students with famous relatives would be let off easier. This was untrue, but the evidence to contradict it was all secret.

Academic discipline seems to have a major feedback loop. If students believe that the secret disciplinary processes are generally fair and stringent, they will be happy with the process and will tend to follow the rules. This leaves the formal disciplinary process to deal with the exceptions, which a good process will be able to handle. Students will buy in to the premise of the system, and most people will be happy.

If, on the other hand, students lose their trust in the fairness of the system, either because of false rumors or because the system is actually unfair, then they'll lose their aversion to rule-breaking and the system, whether honor-based or not, will break down. Several of Garnett's readers tell a story like this.

One has to wonder whether it makes much difference in practice whether a system is formally honor-based or not. Either way, students have an ethical duty to follow the rules. Either way, violations will be punished if they come to light. Either way, at least a few students will cheat without getting caught. The real difference is whether the institution conspicuously trusts the students to comply with the rules, or whether it instead conspicuously polices compliance. Conspicuous trust is more pleasant for everybody, if it works.

[Feel free to talk about your own experiences in the comments. I'm especially eager to hear from current or past Princeton students.]

Breathalyzers and Open Source

Lawyers for 150 Floridians accused of drunk driving have asked a court to order the disclosure of the source code for software running in the breathalyzer machines used by police to analyze their blood alcohol level, according to a Tom Sanders story on vunet.

The defendants say they have the right to examine the machines that accused them, and that a meaningful examination requires access to the machines' software. Prosecutors say the code is a trade secret.

The accused are right that one needs the code to understand fully how the machines work. The machines consist of sensors, a user interface, and control software. The software is the "brain" of the machine, and it is almost certainly involved in the calculations that derive a blood alcohol value from the sensor readings, as well as the display of the calculated value. If the accused have the right to fully examine the machines – and the article says that they do under Florida law – then they should see the source code.

Contrary to the article and some other commentators, this is not a dispute over whether the software should be open source. The accused aren't seeking to open the software to everybody; they only want it opened to their legal teams.

There are standard practices for handling trade-secret information that must be turned over in court cases. A court will typically establish a protective order, which is a kind of nondisclosure agreement covering secret material that is turned over by one side to the other. The protective order will require parties to keep the information secret and to use it only for purposes related to the court proceedings. Typically the information can be turned over to a limited number of expert analysts who have also signed the protective order. Documents containing secret information are filed under seal, and testimony about secret matters may take place in a closed courtroom.

So this issue is not about open source, but about ensuring fairness for the accused. If they're going to be accused based on what some machine says, then they ought to be allowed to challenge the accuracy of the machine. And they can't do that unless they're allowed to know how the machine works.

You might argue that the machine's technical manuals convey enough information. Having read many manuals and examined the innards of many software systems, I'm skeptical of such claims. Often, knowing how the maker says a machine works is a poor substitute for knowing how it actually works. If a machine is flawed, it's likely the maker will either (a) not know about the flaw or (b) be unwilling to admit it exists.

If the article's description of Florida law is correct, this seems like a pretty easy decision for the court.

Mossberg Takes on DRM, Urges CD-DRM Boycott

Walt Mossberg, whose Personal Technology column in the Wall Street Journal is a must-read for many influential but non-geeky technology enthusiasts, discusses the DRM issue in today's column. Not much in the column will be new to regular readers here, or to anyone immersed in the digital copyright issue. But of course Mossberg writes for a different audience, and the column serves that audience well by explaining the issues clearly and maintaining a moderate tone.

In my view, both sides have a point, but the real issue isn't DRM itself – it's the manner in which DRM is used by copyright holders. Companies have a right to protect their property, and DRM is one means to do so. But treating all consumers as potential criminals by using DRM to overly limit their activities is just plain wrong.

Let's be clear: The theft of intellectual property on the Internet is a real problem. Millions of copies of songs, TV shows and movies are being distributed over the Internet by people who have no legal right to do so, robbing media companies and artists of rightful compensation for their work.

Even if you think the record labels and movie studios are stupid and greedy, as many do, that doesn't entitle you to steal their products. If your local supermarket were run by people you didn't like, and charged more than you thought was fair, you wouldn't be entitled to shoplift Cheerios from its shelves.

On the other hand, I believe that consumers should have broad leeway to use legally purchased music and video for personal, noncommercial purposes in any way they want – as long as they don't engage in mass distribution. They should be able to copy it to as many personal digital devices as they own, convert it to any format those devices require, and play it in whatever locations, at whatever times, they choose.

Mossberg urges music and movie companies to use DRM to limit large-scale pirates, while giving ordinary users wide leeway for personal use.

Instead of using DRM to stop some individual from copying a song to give to her brother, the industry should be focusing on ways to use DRM to stop the serious pirates – people who upload massive quantities of music and videos to so-called file-sharing sites, or factories in China that churn out millions of pirate CDs and DVDs.

This is a nice vision, but it's not really possible. It's abundantly clear by now that no DRM system can stop serious pirates. A DRM system that stops serious pirates, and simultaneously gives broad leeway to ordinary users, is even harder to imagine. It's not going to happen.

Although he doesn't address it directly, Mossberg implicitly rejects the other argument for DRM, which says that DRM can enable new pricing models for content and can therefore foster market efficiency. Mossberg says flatly that consumers should have a broad right to make personal uses of content they have bought.

The most surprising part of the column – remember that this is in the Wall Street Journal – is Mossberg's call for a boycott of products with restrictive DRM, such as copy-protected CDs.

Until then, I suggest that consumers avoid stealing music and videos, but also boycott products like copy-protected CDs that overly limit usage and treat everyone like a criminal. That would send the industry a message to use DRM more judiciously.

Whether it's a flat boycott, or just a disinclination to buy such products, this would have an impact on the industry's DRM choices.

To make it happen, people need to learn which CDs use DRM and which don't. One way to tell is to look for the official CD logo on the package. If the CD logo is missing, the disc probably doesn't comply with the CD standard, and the noncompliance is probably caused by DRM. Alternatively, somebody could set up a website with information about which discs use DRM. It would be nice, too, to have a site with information about DVDs, to keep track, for instance, of which discs force viewers to watch movie previews before seeing the movie they bought.

It can't be too hard to set up such a site. If you put ads on it, you could probably make a profit. Who wants to build it?

EFF Researchers Decode Hidden Codes in Printer Output

Researchers at the EFF have apparently confirmed that certain color printers put hidden marks in the pages they print, and they have decoded the marks for at least one printer model.

The marks from Xerox DocuColor printers are encoded in an array of very small yellow dots that appear all over the page. The dots encode the date and time when the page was printed, along with what appears to be a serial number for the printer. You can spot the dots with blue light and a 10X magnifier, and you can then decode the dots to get the date, time, and serial number.

Many other printers appear to do something similar; the EFF has a list.

The privacy implications are obvious. It's now possible to tell when a document was printed, and when two documents were printed on the same printer. It's also possible, given a document and a printer, to tell whether the document was printed on that printer.

Apparently, this was done at the direction of the U.S. government.

The U.S. Secret Service admitted that the tracking information is part of a deal struck with selected color laser printer manufacturers, ostensibly to identify counterfeiters. However, the nature of the private information encoded in each document was not previously known.

...

Xerox previously admitted that it provided these tracking dots to the government, but indicated that only the Secret Service had the ability to read the code.

The assertion that only the Secret Service can read the code is false. The code is quite straightforward. For example, there is one byte for (the last two digits of) the year, one byte for the month, one byte for the day, one byte for the hour, and one byte for the minute.

Now that the code is known, it should be possible to forge the marks. For example, I could cook up an array of little yellow dots that encode any date, time, and serial number I like. Then I could add the dots to any image I like, and print out the image-plus-dots on a printer that doesn't make the marks. The resulting printout would have genuine-looking marks that contain whatever information I chose.
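
For concreteness, here is a minimal sketch in Python of the payload described above. The field order, the serial-number width, and everything about the physical dot layout (parity bits, grid placement) are simplifying assumptions on my part; see the EFF write-up for the real DocuColor pattern. The point is only that once the byte meanings are known, anyone can construct a payload carrying whatever date, time, and serial number they choose.

```python
# Hypothetical sketch of the tracking payload; field order and serial width
# are assumptions, and the physical dot-grid layout is omitted entirely.
def encode_tracking_bytes(year, month, day, hour, minute, serial):
    return bytes([
        year % 100,   # last two digits of the year
        month,
        day,
        hour,
        minute,
    ]) + serial.to_bytes(4, "big")   # 4-byte serial number is an assumption

def decode_tracking_bytes(payload):
    return {
        "year": 2000 + payload[0],
        "month": payload[1],
        "day": payload[2],
        "hour": payload[3],
        "minute": payload[4],
        "serial": int.from_bytes(payload[5:9], "big"),
    }

# Forge a payload claiming any date, time, and printer serial number you like.
forged = encode_tracking_bytes(2005, 10, 17, 9, 30, serial=123456)
print(decode_tracking_bytes(forged))
```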

This could have been prevented by using cryptography, to make marks that can only be decoded by the Secret Service, and that don't allow anyone but the Secret Service to detect whether two documents came from the same printer. This would have added some complexity to the scheme, but that seems like a good tradeoff in a system that was supposed to stay secret for a while.
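
As a sketch of what that might look like, the snippet below (which assumes Python's third-party cryptography package) encrypts the tracking payload to a public key using randomized OAEP padding: only the key holder can decrypt, and because encryption is randomized, two marks from the same printer look unrelated to anyone else. This is purely illustrative, not the scheme any printer actually uses, and a real design would need a far more compact encoding than a full RSA ciphertext.

```python
# Illustrative only: encrypt the tracking payload so that only the holder of
# the private key (here, standing in for the Secret Service) can read it.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Stand-in key pair; in practice the printer would carry only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

payload = bytes([5, 10, 17, 9, 30]) + (123456).to_bytes(4, "big")
mark1 = public_key.encrypt(payload, oaep)   # randomized encryption...
mark2 = public_key.encrypt(payload, oaep)   # ...so repeated marks differ
print(mark1 != mark2)                               # same printer, unlinkable marks
print(private_key.decrypt(mark1, oaep) == payload)  # only the key holder can read it
```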

A Visit From Bill Gates

Bill Gates visited Princeton on Friday, accompanied by his father, a prominent Seattle lawyer who now heads the Gates Foundation, and by Kevin Schofield, a Microsoft exec (and Princeton alumnus) who helped to plan the university visits.

After speaking briefly with Shirley Tilghman, Princeton's president, Mr. Gates spent an hour in a roundtable discussion with a smallish group of computer science faculty. I was lucky enough to be one of them. The meeting was closed, so I won't give you a detailed play-by-play. Essentially, we told him about what is happening in computer science at Princeton; he asked questions and conversation ensued. We talked mostly about computer science education. Along the way I gave a quick description of the new infotech policy course that will debut in the spring. Overall, it was a good, high-energy discussion, and Mr. Gates showed a real passion for computer science education.

After the roundtable, he headed off to Richardson Auditorium for a semi-public lecture and Q&A session. (I say semi-public because there wasn't space for everybody who wanted to get in; tickets were allocated to students by lottery.) The instructions that came with my ticket made it seem like security in the auditorium would be very tight (no backpacks, etc.), but in fact the security measures in place were quite unobtrusive. An untrained eye might not have noticed anything different from an ordinary event. I showed up for the lecture at the last minute, coming straight from the faculty roundtable, so I got one of the worst seats in the whole place. (Not that I'm complaining – I certainly wouldn't have traded away my seat in the faculty roundtable for a better seat at the lecture!)

After an introduction from Shirley Tilghman, Mr. Gates took the stage. He stood alone on the stage and talked for a half-hour or so. His presentation was punctuated by two videos. The first showed a bunch of recent Princeton alums who work at Microsoft talking about life at Microsoft in a semi-serious, semi-humorous way. (The highlight was seeing Corey in a toga.) The second video was a five-minute movie in which Mr. Gates finds himself in the world of Napoleon Dynamite. It co-stars Jon Heder, who played Napoleon in the movie. I haven't seen the original movie but I'm told that many of the lines and gags in the video come from the movie. People who know the original movie seem to have found the video funny.

The theme of the lecture was the seamless coolness of the future computing environment. It was heavy on promotion and demonstrations of Microsoft products.

The Q&A was pretty interesting. He was asked how to reconcile his current cheerleading for C.S. education with his own history of dropping out of college. He had a funny and thoughtful answer. I assume he's had plenty of chances to hone his answer to that question.

A student asked him a question about DRM. His answer was fairly general, talking about the importance of both consumer flexibility and revenue for creators. He went on to say some harsh things about Blu-Ray DRM, saying that the system over-restricted consumers' use and that its content-industry backers were making a mistake by pushing for it.

(At this point I had to leave due to a previous commitment, so from here on I'm relying on reports from people who were there.)

Another student asked him about intellectual property, suggesting that Microsoft was both a beneficiary and a victim of a strong patent system. Mr. Gates said that the patent system is basically sound but could benefit from some tweaking. He didn't elaborate, but I assume he was referring to patent reform suggestions Microsoft has made previously.

After the Q&A, Mr. Gates accepted the "Crystal Tiger" award from a student group. Then he left for his next university visit, reportedly at Howard University.

Tax Breaks for Security Tools

Congress may be considering offering tax breaks to companies that deploy cybersecurity tools, according to an Anne Broache story at news.com. This might be a good idea, depending on how it's done.

I've written before about the economics of cybersecurity. A user's investment in security protects the user himself; and he has an incentive to pay for the efficient level of protection for himself. But each user's security choices also affect others. If Alice's computer is compromised, it can be used as a springboard for attacking Bob's computer, so Alice's decisions affect Bob's security. Alice has little or no incentive to invest in protecting Bob. This kind of externality is common and leads to underinvestment in security.

Public policy can try to fix this by adjusting incentives in the right direction. A good policy will boost incentives to deploy the kinds of security measures that tend to protect others. Protecting oneself is good, but there is already an adequate incentive to do that; what we want is a bigger incentive to protect others. (To the extent that the same steps tend to protect both oneself and others, it makes sense to boost incentives for those steps too.)

A program along these lines would presumably give tax breaks to people and organizations that use networked computers in a properly secure way. In an ideal world, breaks would be given to those who do well in managing their systems to protect others. In practice, of course, we can't afford to do a fancy security evaluation on each taxpayer to see whether he deserves a tax break, so we would instead give the break to those who meet some formalized criteria that serve as a proxy for good security. Designing these criteria so that they correlate well with the right kind of security, and so that they can't be gamed, is the toughest part of designing the program. As Bruce Schneier says, the devil is in the details.

Another approach, which may be what Rep. Lundgren is trying to suggest in the original story, is to give tax breaks to companies that develop security technologies. A program like this might just be corporate welfare, or it might be designed to have a useful public purpose. To be useful, it would have to lead to lower prices for the right kinds of security products, or better performance at the same price. Whether it would succeed at this depends again on the details of how the program is designed.

If the goal is to foster more capable security products in the long run, there is of course another approach: government could invest in basic research in cybersecurity, or at least it could reverse the current disinvestment.

Virtual Worlds: Only a Game?

I wrote yesterday about virtual worlds, and the inevitability of government intervention in them. One objection to government intervention is that virtual worlds are only games; and it doesn't make sense for government to intervene in games.

Indeed, many members of virtual worlds want the worlds to be games that operate at some remove from the real world. Games are more fun, they say, when what happens in the game doesn't have real-world consequences. This was a common topic of discussion at State of Play.

The crux of this issue is the status of the in-world (i.e., in the virtual world) economy. Players can accumulate in-world stuff, including in-world currency, and they can trade in-world stuff for in-world currency. (A world might be designed without an identified currency, but it's fairly certain that one in-world commodity would emerge as a consensus currency anyway.) Is in-world money just Monopoly money, or is it in some sense real money?

The only sensible answer is that it's real money if it's readily exchangeable for real-world currency. If you can trade in-world gold pieces for U.S. dollars (or Euros, etc.), and vice versa, then in-world gold is real money, and the in-world economy is a real economy.

If the world-designer wants to keep the world's economy from becoming real, then, the designer must stop members from exchanging in-world currency for real currency. And this seems pretty much impossible, because there is no way to stop players from making side payments in the real world. Suppose Alice has gold pieces and Bob has dollars, and they want to trade. Bob transfers the dollars to Alice via a real-world channel (perhaps PayPal); virtual Alice gives virtual Bob the gold pieces. In-world, all that happens is a gift of gold from Alice to Bob. The dollar transfer isn't visible to the world's management. The world-designer can ban gifts of gold, but Alice and Bob can work around that ban by having Alice "lose" the gold in a private place where Bob will find it, or by cooking up a sham transaction where Alice buys a virtual toothpick from Bob at an inflated price.

Experience seems to show that any sufficiently popular in-world currency will become exchangeable for real money, whether the world-designer likes it or not.

There's a useful lesson here about the limitations of code as a law-enforcement mechanism. One might think that code is law in a virtual world, in the sense that the world-designer writes the software code that defines what is possible in the world. It would be hard to think of a situation where code had more power to control behavior than in a virtual world. And yet the code can't separate the virtual world from the real world. The reason it fails to do so is that the code doesn't define the whole domain of human action; and people can defeat the code's would-be restrictions by acting outside the code's domain of control.

Once a virtual world gets big enough, and people value in-world stuff highly enough, it can no longer be just a game. The virtual world will touch the real world, along a sort of border through which money and communication flow.

Virtual World, Meet Terrestrial Government

Something remarkable is happening in virtual worlds. These are online virtual "spaces" where you can play a virtual character, and interact with countless other characters in a rich environment. It sounds like a harmless game, but there's more to it than that. Much more.

When you put so many people into a place where they can talk to each other, where there are scarce but desirable objects, where they can create new "things" and share them, civilization grows. Complex social structures appear. Governance emerges. A sophisticated economy blooms. All of these things are happening in virtual worlds.

Consider the economy of Norrath, the virtual world of Sony Online Entertainment's EverQuest service. Norrath has a currency, which trades on exchange markets against the U.S. dollar. So if you run a profitable business in Norrath, you can trade your Norrath profits for real dollars, and then use the dollars to pay your rent here in the terrestrial world. Indeed, a growing number of people are making their livings in virtual worlds. Some are barely paying their earth rent; but some are doing very well indeed. In 2003, Norrath was reportedly the 79th richest country in the world, as measured by GDP. Richer than Bulgaria.

(Want to try out a virtual world? SecondLife is a smaller but interesting world that offers free membership. They even have a promotional video made by members.)

Virtual worlds have businesses. They have stock markets where you can buy stock in virtual corporations. They have banks. People have jobs. And none of this is regulated by any terrestrial government.

This can't last.

Last weekend at the State of Play conference, the "great debate" was over whether virtual worlds should be subject to terrestrial laws, or whether they are private domains that should determine their own laws. But regardless of whether terrestrial regulators should step in, they certainly will. Stock market regulators will object to the trading of virtual stocks worth real money. Employment regulators will object to the unconstrained labor markets, where people are paid virtual currency redeemable for dollars, in exchange for doing tasks specified by an employer. Banking regulators will object to unlicensed virtual banks that hold currency of significant value. Law enforcement will discover or suspect that virtual worlds are being used to launder money. And tax authorities will discover that things are being bought and sold, income is being earned, and wealth is being accumulated, all without taxation.

When terrestrial governments notice this, and decide to step in, things will get mighty interesting. If I ran a virtual world, or if I were a rich or powerful resident of one, I would start planning for this eventuality, right away.

Cost Tradeoffs of P2P

On Thursday, I jumped into a bloggic discussion of the tradeoffs between centrally-controlled and peer-to-peer design strategies in distributed systems. (See posts by Randy Picker (with comments from Tim Wu and others), Lior Strahilevitz, me, and Randy Picker again.)

We've agreed, I think, that large-scale online services will be designed as distributed systems, and the basic design choice is between a centrally-controlled design, where most of the work is done by machines owned by a single entity, and a peer-to-peer design, where most of the work is done by end users' machines. Google is a typical centrally-controlled design. BitTorrent is a typical P2P design.

The question in play at this point is when the P2P design strategy has a legitimate justification. Which justifications are "legitimate"? This is a deep question in general, but for our purposes it's enough to say that improving technical or economic efficiency is a legitimate justification, but frustrating enforcement of copyright is not. Actions that have legitimate justifications may also have harmful side-effects. For now I'll leave aside the question of how to account for such side-effects, focusing instead on the more basic question of when there is a legitimate justification at all.

Which design is more efficient? Compared to central control, P2P has both disadvantages and advantages. The main disadvantage is that in a P2P design, the computers participating in the system are owned by people who have differing incentives, so they cannot necessarily be trusted to work toward the common good of the system. For example, users may disconnect their machines when they're not using the system, or they may "leech" off the system by using the services of others but refusing to provide services. It's generally harder to design a protocol when you don't trust the participants to play by the protocol's rules.

On the other hand, P2P designs have three main efficiency advantages. First, they use cheaper resources. Users pay about the same price per unit of computing and storage as a central provider would pay. But the users' machines are a sunk cost – they're already bought and paid for, and they're mostly sitting idle. The incremental cost of assigning work to one of these machines is nearly zero. But in a centrally controlled system, new machines must be bought, and reserved for use in providing the service.

Second, P2P deals more efficiently with fluctuations in workload. The traffic in an online system varies a lot, and sometimes unpredictably. If you're building a centrally-controlled system, you have to make sure that extra resources are available to handle surges in traffic; and that costs money. P2P, on the other hand, has the useful property that whenever you have more users, you have more users' computers (and network connections) to put to work. The system's capacity grows automatically whenever more capacity is needed, so you don't have to pay extra for surge-handling capacity.

Third, P2P allows users to subsidize the cost of running the system, by having their computers do some of the work. In theory, users could subsidize a centrally-controlled system by paying money to the system operator. But in practice, monetary transfers can bring significant transaction costs. It can be cheaper for users to provide the subsidy in the form of computing cycles than in the form of cash. (A full discussion of this transaction cost issue would require more space – maybe I'll blog about it someday – but it should be clear that P2P can reduce transaction costs at least sometimes.)
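
To make the first two advantages concrete, here is a toy back-of-the-envelope model. All of the numbers are hypothetical assumptions of mine, not figures from any real system; the point is just that a central service must provision for its peak hour and then sits partly idle, while P2P capacity rises and falls with the number of active users.

```python
# Toy model of the provisioning and surge-handling argument.
# Every number below is a made-up assumption, chosen only for illustration.

hourly_active_users = [1_000, 800, 600, 2_000, 9_000, 4_000]  # hypothetical traffic
requests_per_user = 5        # requests each active user generates per hour
machine_capacity = 1_000     # requests/hour one dedicated server can handle
peer_capacity = 6            # spare requests/hour one user's machine contributes

peak_demand = max(hourly_active_users) * requests_per_user
central_machines = -(-peak_demand // machine_capacity)  # ceiling division
print(f"central design: buy {central_machines} machines to cover the peak hour")

for users in hourly_active_users:
    demand = users * requests_per_user
    central_utilization = demand / (central_machines * machine_capacity)
    p2p_supply = users * peer_capacity   # supply tracks demand automatically
    print(f"{users:5d} users | demand {demand:6d} | "
          f"central utilization {central_utilization:4.0%} | p2p supply {p2p_supply:6d}")
```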

Of course, this doesn't prove that P2P is always better, or that any particular P2P design in use today is motivated only by efficiency considerations. What it does show, I think, is that the relative efficiency of centrally-controlled and P2P designs is a complex and case-specific question, so that P2P designs should not be reflexively labeled as illegitimate.

"Centralized" Sites Not So Centralized After All

There's a conversation among Randy Picker, Tim Wu, and Lior Strahilevitz over at the U. Chicago Law School Blog about the relative merits of centralized and peer-to-peer designs for file distribution. (Picker post with Wu comments; Strahilevitz post) Picker started the discussion by noting that photo sharing sites like Flickr use a centralized design, rather than peer-to-peer. He questioned whether P2P design made sense, except as a way to dodge copyright enforcement. Wu pointed out that P2P designs can distribute large files more efficiently, as in BitTorrent. Strahilevitz pointed out that P2P designs resist censorship more effectively than centralized ones.

There's a subtlety hiding here, and in most cases where people compare centralized services to distributed ones: from a technology standpoint, the "centralized" designs aren't really centralized.

A standard example is Google. It's presented to users as a single website, but if you look under the hood you'll see that it's really implemented by a network of hundreds of thousands of computers, distributed in data centers around the world. If you direct your browser to www.google.com, and I direct my browser to the same URL, we'll almost certainly interact with entirely different sets of computers. The unitary appearance of the Google site is an illusion maintained by technical trickery.
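
You can see a hint of this trickery yourself. The short sketch below (assuming a machine with Internet access and Python's standard library) asks DNS for the addresses behind www.google.com; the set of addresses you get back will likely differ from the set I get back, and may change from one lookup to the next.

```python
# Resolve www.google.com and print the distinct server addresses returned.
# Results depend on where and when you run this, which is exactly the point.
import socket

addrs = {info[4][0] for info in socket.getaddrinfo("www.google.com", 443)}
print(sorted(addrs))
```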

The same is almost certainly true of Flickr, though on a smaller scale. Any big service will have to use a distributed architecture of some sort.

So what distinguishes "centralized" sites from P2P designs? I see two main differences.

(1) In a "centralized" site, all of the nodes in the distributed system are controlled by the same entity; in a P2P design, most nodes are controlled by end users. There is a technical tradeoff here. Centralized control offers some advantages, but they sacrifice the potential scalability that can come from enlisting the multitude of end user machines. (Users own most of the machines in the world, and those machines are idle most of the time – that's a big untapped resource.) Depending on the specific application, one strategy or the other might offer better reliability.

(2) In a "centralized" site, the system interacts with the user through browser technologies; in a P2P design, the user downloads a program that offers a more customized user interface. There is another technical tradeoff here. Browsers are standardized and visiting a website is less risky for the user than downloading software, but a custom user interface sometimes serves users better.

The Wu and Strahilevitz argument focused on the first difference, which does seem the more important one these days. The bottom line, I think, is that P2P-style designs that involve end users' machines make the most sense when scalability is at a premium, or when such designs are more robust.

But it's important to remember that the issue isn't whether the service uses lots of distributed computers. The issue is who controls those computers.
