Predictions for 2011

As promised, the official Freedom to Tinker predictions for 2011. These predictions are the result of discussions that included myself, Joe Hall, Steve Schultze, Wendy Seltzer, Dan Wallach, and Harlan Yu, but note that we don’t individually agree with every prediction.

  1. DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.
  2. Copyright and patent issues will continue to be stalemated in Congress, with no major legislation on either subject.
  3. Momentum will grow for HTTPS by default, with several major websites adding HTTPS support. Work will begin on adding HTTPS-by-default support to Apache.
  4. Despite substantial attention by Congress to online privacy, the FTC won’t be granted authority to mandate Do Not Track compliance.
  5. Some advertising networks and third-party Web services will begin to voluntarily respect the Do Not Track header, which will be supported by all the major browsers. However, sites will have varying interpretations of what the DNT header requires, leading to accusations that some purportedly DNT-respecting sites are not fully DNT-compliant. (A sketch of what server-side DNT handling might look like follows this list.)
  6. Congress will pass an electronic privacy bill along the lines of the principles set out by the Digital Due Process Coalition.
  7. The seemingly N^2 patent lawsuits among all the major smartphone players will be resolved through a grand cross-licensing bargain, cut in a dark, smoky room, whose terms will only be revealed through some congratulatory emails that leak to the press. None of these lawsuits will get anywhere near a courtroom.
  8. Android smartphones will continue gaining market share, mostly at the expense of BlackBerry and Windows Mobile phones. However, Android’s gains will mostly be at the low end of the market; the iPhone will continue to outsell any single Android smartphone model by a wide margin.
  9. 2011 will see the outbreak of the first massive botnet/malware that attacks smartphones, most likely iPhone or Android models running older software than the latest and greatest. If Android is the target, it will lead to aggressive finger-pointing, particularly given how many users are presently running Android software that’s a year or more behind Google’s latest—a trend that will continue in 2011.
  10. Mainstream media outlets will continue building custom “apps” to present their content on mobile devices. They’ll fall short of expectations and fail to reverse the decline of any magazines or newspapers.
  11. At year’s end, the district court will still not have issued a final judgment on the Google Book Search settlement.
  12. The market for Internet set-top boxes like Google TV and Apple TV will continue to be chaotic throughout 2011, with no single device taking a decisive market share lead. The big winners will be online services like Netflix, Hulu, and Pandora that work with a wide variety of hardware devices.
  13. Online sellers with device-specific consumer stores (Amazon for Kindle books, Apple for iPhone/iPad apps, Microsoft for Xbox Live, etc.) will come under antitrust scrutiny, and perhaps even be dragged into court. Nothing will be resolved before the end of 2011.
  14. With electronic voting machines beginning to wear out but budgets tight, there will be much heated discussion of electronic voting, including antitrust concern over the e-voting technology vendors. But there will be no fundamental changes in policy. The incumbent vendors will continue to charge thousands of dollars for products that cost them a tiny fraction of that to manufacture.
  15. Pressure will continue to mount on election authorities to make it easier for overseas and military voters to cast votes remotely, despite all the obvious-to-everybody-else security concerns. While counties with large military populations will continue to conduct “pilot” studies with Internet voting, with grandiose claims of how they’ve been “proven” secure because nobody bothered to attack them, very few military voters will cast actual ballots over the Internet in 2011.
  16. In contrast, where domestic absentee voters are permitted to use remote voting systems (e.g., systems that transmit blank ballots that the voter returns by mail) voters will do so in large numbers, increasing the pressure to make remote voting easier for domestic voters and further exacerbating security concerns.
  17. At least one candidate for the Republican presidential nomination will express concern about the security of electronic voting machines.
  18. Multiple Wikileaks alternatives will pop up, and pundits will start to realize that mass leaks are enabled by technology trends, not just by one freaky Australian dude.
  19. The RIAA and/or MPAA will be sued over their role in the government’s actions to reassign DNS names owned by allegedly unlawful web sites. Even if the lawsuit manages to get all the way to trial, there won’t be a significant ruling against them.
  20. Copyright claims will be asserted against players even further removed from underlying infringement than Internet/online Service Providers: domain name system participants, ad and payment networks, and upstream hosts. Some of these claims will win at the district court level, mostly on default judgments, but appeals will still be pending at year’s end.
  21. A distributed naming system for Web/broadcast content will gain substantial mindshare and measurable US usage after the trifecta of attacks on Wikileaks DNS, COICA, and further attacks on privacy-preserving or anonymous registration in the ICANN-sponsored DNS. It will go even further in another country.
  22. ICANN still will not have introduced new generic TLDs.
  23. The FCC’s recently-announced network neutrality rules will continue to attract criticism from both ends of the political spectrum, and will be the subject of critical hearings in the Republican House, but neither Congress nor the courts will overturn the rules.
  24. The tech policy world will continue debating the Comcast/Level 3 dispute, but Level 3 will continue paying Comcast to deliver Netflix content, and the FCC won’t take any meaningful actions to help Level 3 or punish Comcast.
  25. Comcast and other cable companies will treat the Comcast/Level 3 dispute as a template for future negotiations, demanding payments to terminate streaming video content. As a result, the network neutrality debate will increasingly focus on streaming high-definition video, and legal academia will become a lot more interested in the economics of Internet interconnection.
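
To make prediction #5 concrete, here is a minimal sketch of what “respecting DNT” could mean on the server side. Only the header itself (“DNT: 1”) is real; the function and the two policy knobs are invented for illustration, and the “varying interpretations” the prediction anticipates live entirely in what a site chooses to do when the header is present.

    # Hypothetical sketch of server-side Do Not Track handling.
    # Only the header name/value ("DNT: 1") is real; the policy
    # knobs below are made up for illustration.
    def tracking_policy(headers):
        if headers.get("DNT") == "1":
            # One plausible reading: no tracking cookies, no ad-targeting logs.
            return {"set_tracking_cookie": False, "log_for_ad_targeting": False}
        return {"set_tracking_cookie": True, "log_for_ad_targeting": True}

    print(tracking_policy({"DNT": "1"}))  # {'set_tracking_cookie': False, ...}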

Comments

  1. Good list of predictions.

    I hope those regions using machines that count votes electronically put a stop to it. Votes cast by the people need to be counted by the people, even if it’s slower. Votes counted by software mean that the public no longer has access to the vote-counting process. Having it done by closed-source software accessible by a select few is no better than hiring a consulting firm with unknown biases or affiliations to count the votes in lieu of the public.
    And even if the software were open-source, there is no transparent way for the general public to assure themselves that the machines are using the published version on election day.

    Computer hobbyists know full well all the places one can hide code that distorts the count; finding it would require a truly exceptional auditor to scour every location for trickery.
    So depending on a software audit is very risky.

    The only proven and effective hedge against corruption is to distribute the counting and checking process among thousands of human volunteers, in full view of scrutineers representing the different candidates or parties.

    To quote Stalin:
    “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this — who will count the votes, and how.”

    • Indeed. An open-source e-voting system is the perfect real-world use case for a Ken Thompson hack.
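
      To unpack that: in Thompson’s “Reflections on Trusting Trust,” the compiler binary recognizes certain source and miscompiles it, so auditing the source of everything, compiler included, proves nothing. A toy sketch of the first half of the trick, with every name invented for illustration:

          # Toy "trusting trust" sketch: the login source is clean, yet
          # the "compiled" login is backdoored by the compiler.
          BACKDOOR = 'if password == "joshua": return True  # injected\n    '

          def evil_compile(source):
              # Trigger: recognize the login program and miscompile it.
              target = "def check_password(password):\n    "
              if target in source:
                  return source.replace(target, target + BACKDOOR)
              # (The real attack has a second trigger: when compiling the
              # compiler itself, re-insert this logic quine-style, so the
              # hack survives even after it is removed from all source.)
              return source

          LOGIN_SRC = (
              "def check_password(password):\n"
              "    return password == 'correct horse'\n"
          )

          exec(evil_compile(LOGIN_SRC))
          print(check_password("correct horse"))  # True: normal behavior
          print(check_password("joshua"))         # True: the backdoor

      For e-voting the moral is the same: a source-level audit cannot rule out a subverted toolchain or build process.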

  2. Why are the comments on “Some Technical Clarifications About Do Not Track” closed? I want to reply to one of the comments there.

    The comment is this:

    Comment by Andrew on January 25th, 2011 at 5:47 pm.

    The huge majority of the web works fine when you block cross-domain cookies.

    However, you can still be easily tracked by image beacons, Flash cookies, and browser fingerprinting.

    and my reply would be to suggest blocking cross-domain images below a certain size, or even making all cross-domain elements (and all plugin-requiring elements, like Flash, whether cross-domain or not) appear at first as placeholders that you can click on to view; no traffic would be generated to the element’s host until and unless the user chose to display that element. As for browser fingerprinting, I already suggested that browsers shouldn’t leak so much unique information in their request headers; almost none of it is actually needed by a typical site to properly fulfill an HTTP request.
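
    For concreteness, here is the blocking rule I have in mind as a sketch; the size threshold and all names are made up for illustration:

        # Hypothetical auto-load policy for a subresource on a page.
        from urllib.parse import urlparse

        BEACON_MAX_BYTES = 2048  # assumed cutoff for "tracking-pixel sized"

        def should_autoload(page_url, resource_url, content_length, is_plugin):
            if is_plugin:
                return False  # Flash etc.: always click-to-view
            same_host = (urlparse(page_url).hostname ==
                         urlparse(resource_url).hostname)
            if same_host:
                return True
            # Cross-domain image small enough to be a beacon (size as
            # reported by Content-Length): show a placeholder instead.
            return content_length > BEACON_MAX_BYTES

    Anything not auto-loaded would render as a click-to-view placeholder, so no request reaches the third-party host unless the user opts in.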

    • Comments are automatically closed two weeks after a post is made in order to keep us from having to go back and deal with spammers that spam every single old post (which has been a problem in the past). Perhaps we should extend that period.

      • Just put a captcha on all such comment posts, or on all comment posts, and any spam problems will go away.

        • …intelligently even (google “Mollom”), and it does not solve the problem.

          Spammers now pay humans to solve captchas, among many other tricky tactics.

          I’ve actually been meaning to do a blog post about this.

          • Oh, really? Then where is all the spam this site should be getting? Apparently you have a solution that stops any from appearing on even the recent articles, so closing the comments on the older articles is not the part of your setup that actually works, whether or not the captcha is.

            So why not drop all but the solution that’s actually working?

          • Clueless+entitled is not a good combination.

          • Our spam protection has to be multifactor, unfortunately. It also involves an annoyingly high level of human intervention. I really will write a blog post about this soon.

          • Who are you calling “clueless”? I hope you realize that your comment looks like an ad hominem response to the argument that was made for getting rid of the reply window.

            I seem to remember the comments here being much more vibrant before that “feature” was added, with some posts getting hundreds of relevant comments, and there wasn’t a whole lot of spam. Whatever filtering you were using then was obviously adequate, so I am mystified that you didn’t stick with it and instead changed to one that has significantly dampened the community discussions.

  3. How can you call Amazon/Kindle “device-specific”? You can read Kindle books on at least five platforms, depending on how you mince words about iPhone/iPad being different platforms.

    • I think the intent here is to differentiate the Kindle scheme (where you buy from Amazon and then have a relatively limited set of viewing options) from something like Amazon.com’s MP3 store, where you get something that works everywhere without restrictions.

  4. Peter Moulder says

    The “fail to prevent widespread infringement” line is just a cheap shot: it may be true, but one could equally say that police fail to prevent widespread infringement of laws, or that safety equipment fails to prevent widespread injuries; by itself, that isn’t enough reason to do away with police or safety equipment, or for a company to decide not to deploy DRM. If no better argument can be worked into a prediction then you might as well remove this.

    • Without police, there would be much higher rates of property crime and violent crime. Without safety equipment, there would be a much higher rate of injuries and deaths. Without DRM, there would be exactly as much copyright infringement.

  5. Damian Ondore says

    1. Innovations, both good (APIs, repositories, applications, devices, whistleblowing websites, methods of delivery) and bad (viruses, malware, methods of delivery), will continue to appear at last year’s frantic pace, driven by clear and defined market needs and by the revenue associated with those needs.

    2. Stewards of innovation in the form of corporations and publishers will attempt to protect their revenue and limit access to it by others, with patents and anti-competitive behaviour.

    3. Regulators will continue to attempt to impose limits on the destructive possibilities of the efforts of these corporations.

    4. The regulators will be hampered in their attempts because of the market need for the innovations, and because of the influence of the corporations.

  6. re: #8
    And in good news for wine drinkers, wine remains the most popular drink in the US, continuing to outsell any specific brand of beer.

    It’s not Samsung against Apple: it’s iPhone against Android, and iPhone is going to lose.

  7. While I do think social media websites will soon switch to HTTPS-by-default, I sincerely doubt we’ll soon see the majority of the web implement it. It’s not a server CPU resource issue. The problem is the length of time required to establish the connection.

    SSL takes 12 packets to establish a connection, compared to just 3 packets for a plain TCP connection: several extra round trips, so it takes roughly four times as long to get a connection up. For a website that’s 100ms away, that’s a very long time. And for a website that’s 250ms away, that’s an absolute eternity. Imagine waiting over 3s to load Google.com each time you want to search.

    There’s not a whole lot of room for improvement. Services like Akamai can sit in front of a website, with nodes all over the world. Clients connect to a nearby node, the nodes hold persistent HTTPS connections open to the origin server, and requests get pipelined over those connections. But that gets expensive, fast.

    There are some other things that could bring down latencies, such as reduced network congestion or optical switching. But at some point you start to hit absolute limits, like the speed of light in an optical fiber, which is roughly a third slower than the speed of light in a vacuum. Light in a fiber can travel only about 12,500 miles in 100ms.
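
    If you want to measure the handshake cost on a path near you, here is a quick sketch using Python’s standard socket and ssl modules (the host is just an example):

        # Compare plain TCP connect time with a full TLS handshake.
        import socket, ssl, time

        HOST, PORT = "www.google.com", 443  # example host

        def tcp_connect_ms(host, port):
            t0 = time.time()
            s = socket.create_connection((host, port))
            ms = (time.time() - t0) * 1000
            s.close()
            return ms

        def tls_handshake_ms(host, port):
            ctx = ssl.create_default_context()
            t0 = time.time()
            s = socket.create_connection((host, port))
            tls = ctx.wrap_socket(s, server_hostname=host)  # handshake here
            ms = (time.time() - t0) * 1000
            tls.close()
            return ms

        print("TCP connect:   %4.0f ms" % tcp_connect_ms(HOST, PORT))
        print("TLS handshake: %4.0f ms" % tls_handshake_ms(HOST, PORT))

    The gap between the two numbers is the extra round trips at issue; it scales with distance, not with server CPU.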

    • …but Adam Langley is. And he says:

      In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

      http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html

      I have no idea if that conflicts with the figures you’re stating, but it does seem quite possible to switch to SSL/TLS in an efficient fashion.

      • He posted more info in response to an F5 claim that SSL is expensive unless you buy their stuff:
        http://www.imperialviolet.org/2011/02/06/stillinexpensive.html

        […]
        SSL is just not that computationally expensive any more. Here are the real costs of HTTPS deployment these days:

        • Virtual hosting still doesn’t work in the real world because Microsoft never put support into Windows XP.
        • Sorting out mixed content issues on your website.

        The F5 article does mention the first of these, but SSL hardware doesn’t help with either of them.

        All sites should deploy HTTPS because attacks like Firesheep are too easy to do. Even sites where you don’t login should deploy HTTPS (imagine the effect of spoofing news websites at a major financial conference to headline “Market crashes”). You should use HSTS to stop sslstrip. But you are probably not the sort of organisation which needs to worry about multi-million dollar attacks aimed at factoring your key.
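
        The HSTS advice is a one-header deployment. A minimal sketch, written here as WSGI middleware in Python purely for illustration (any web server can set the same header):

            # Add Strict-Transport-Security to every response so returning
            # browsers refuse plain-HTTP downgrades (defeating sslstrip).
            def hsts_middleware(app):
                def wrapped(environ, start_response):
                    def start(status, headers, exc_info=None):
                        headers.append(("Strict-Transport-Security",
                                        "max-age=31536000; includeSubDomains"))
                        return start_response(status, headers, exc_info)
                    return app(environ, start)
                return wrapped

        The header must itself be delivered over HTTPS, and max-age is in seconds (one year here).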

    • Glen Turner says

      Your figures are about right; the 14,000km Sydney-California undersea SDH link takes about 90ms.

      The SSL people are pretty serious about reducing RTTs, which is good. But even major protocols like HTTP have missed the move of the networking bottleneck from bandwidth to latency: for example, a server doesn’t stream the images along with the HTML page that uses them, but instead pays an RTT to save bandwidth, leaving it to the client to request each image.

      Content distribution networks are easily the best way to reduce latency. They also help considerably with the bandwidth-delay product, and thus give better performance where client computers have inadequate TCP buffers (e.g., the default on Windows XP is 16KB, about right for a 100Mbps LAN but nowhere near the 4MB-16MB required for high-speed Internet access). Google and Facebook are large enough to have built their own CDNs; Akamai and others hire out their CDNs to content providers; there is no significant public CDN infrastructure.
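
      To check the buffer arithmetic: the bandwidth-delay product is the amount of data that must be in flight to keep a path full.

          # Bandwidth-delay product: the TCP buffer needed to fill a path.
          def bdp_bytes(bandwidth_bps, rtt_seconds):
              return bandwidth_bps * rtt_seconds / 8

          print(bdp_bytes(100e6, 0.001))  # 100Mbps LAN, 1ms RTT: 12,500 B
          print(bdp_bytes(100e6, 0.25))   # 100Mbps at 250ms RTT: ~3.1 MB

      A 16KB window therefore saturates a LAN but caps a 250ms path at roughly 0.5Mbps.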

      HTTPS Everywhere is somewhat incompatible with a public CDN infrastructure. So in supporting HTTPS Everywhere there is the risk that you are indirectly supporting a two-tier Internet: sites that can afford to build or hire a CDN get good end-to-end performance, and sites that cannot, don’t. It’s not a strong argument against HTTPS Everywhere, but it is a risk that should be monitored.

      You should also ask: HTTPS to where? The widespread use of for-hire CDNs means that your HTTPS connection often runs not to the firm named on the certificate but to a CDN provider. This may not be a bad thing, since the CDN provider often provides better security than the content provider, but it is not what is written on the box.

  8. John Wendt says

    “With electronic voting machines beginning to wear out … incumbent vendors will continue to charge thousands of dollars for products that cost them a tiny fraction of that to manufacture.”

    Is there anything about voting that precludes the use of custom open-source software and COTS hardware, with appropriate encryption?

    • trsm.mckay says

      Short answer: security concerns preclude the use of COTS hardware.

      Longer answer: it depends upon the definition of “COTS”. We could theoretically design a system and encourage multiple vendors to produce hardware that satisfies the system requirements. The details of what the hardware needs depend upon the design, but it will likely need several physical security measures and multiple levels of security processors (obviously for crypto, but also for confidentiality, integrity, and secure-API reasons). If you have a big enough user base and an appropriately detailed standard, then by some definitions this could produce COTS hardware, albeit with specialized features.

  9. NPR’s Marketplace ran a story yesterday about IPv4 addresses running out very soon (http://marketplace.publicradio.org/display/web/2011/01/25/pm-internet-running-out-of-digital-addresses/). Any prediction on how close we’ll get to consumer-level IPv6 service in 2011?

    • The answer depends on how close you think consumer-level (NAT+RFC1918)^n-based Internet service is to consumer-level IPv6…?

      If your answer is “very close” or “identical,” then this is going to be your year! If not (and you don’t live in one of a few places in France, the Netherlands, or Japan), then this may not be your decade.