March 19, 2024

Nuts and Bolts of Network Discrimination

One of the reasons the network neutrality debate is so murky is that relatively few people understand the mechanics of traffic discrimination. I think that in reasoning about net neutrality it helps to understand how discrimination would actually be put into practice. That’s what I want to explain today. Don’t worry, the details aren’t very complicated.

Think of the Internet as a set of routers (think: metal boxes with electronics inside) connected by links (think: long wires). Packets of data get passed from one router to another, via links. A packet is forwarded from router to router, until it arrives at its destination.

Focus now on a single router. It has several incoming links on which packets arrive, and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router will figure out (by methods I won’t describe here) on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out on it immediately. But if the outgoing link is busy transmitting another packet, the newly arrived packet will have to wait – it will be “buffered” in the router’s memory, waiting its turn until the outgoing link is free.

Buffering lets the router deal with temporary surges in traffic. But if packets keep showing up faster than they can be sent out on some outgoing link, the number of buffered packets will grow and grow, and eventually the router will run out of buffer memory.

At that point, if one more packet shows up, the router has no choice but to discard a packet. It can discard the newly arriving packet, or it can make room for the new packet by discarding something else. But something has to be discarded.
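
To make that concrete, here is a minimal sketch, in Python, of one outgoing link and its buffer. It is an illustration of the idea only, with an arbitrary capacity figure, not anyone’s actual router code.

    from collections import deque

    class OutputLink:
        # Toy model of one outgoing link: packets wait in a bounded
        # buffer, and when the buffer is full something must be discarded.
        def __init__(self, capacity=64):        # capacity is illustrative
            self.buffer = deque()
            self.capacity = capacity

        def enqueue(self, packet):
            if len(self.buffer) < self.capacity:
                self.buffer.append(packet)      # room left: buffer it
                return None
            return packet                       # full: discard the newcomer
                                                # (a router could instead
                                                # evict another buffered packet)

        def transmit(self):
            # Called whenever the link becomes free; send the oldest packet.
            return self.buffer.popleft() if self.buffer else None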

(This is one illustration of the “best effort” principle, which is one of the clever engineering decisions that made the Internet feasible. The Internet will do its best to deliver each packet promptly, but it doesn’t make any guarantees. It’s up to software that uses the Internet Protocol to detect dropped packets and recover. The software you’re using to retrieve these words can, and probably often does, recover from dropped packets.)

When a router is forced to discard a packet, it can discard any packet it likes. One possibility is that it assigns priorities to the packets, and always discards the packet with lowest priority. The technology doesn’t constrain how packets are prioritized, as long as there is some quick way to find the lowest-priority packet when it becomes necessary to discard something.

This mechanism defines one type of network discrimination, which prioritizes packets and discards low-priority packets first, but only discards packets when that is absolutely necessary. I’ll call it minimal discrimination, because it only discriminates when it can’t serve everybody.

With minimal discrimination, if the network is not crowded, lots of low-priority packets can get through. Only when there is an unavoidable conflict with high-priority packets is a low-priority packet inconvenienced.
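
Here is how minimal discrimination might look, extending the toy sketch above. A heap keyed on priority makes it quick to find the lowest-priority packet, and nothing at all is discarded until the buffer is actually full. (How buffered packets are ordered for transmission is a separate question; see the postscript below.)

    import heapq, itertools

    class MinimalDiscriminationBuffer:
        # Buffers everything while space remains; evicts the lowest-priority
        # packet only when the buffer is full. Larger number = higher priority.
        def __init__(self, capacity=64):        # capacity is illustrative
            self.heap = []                      # (priority, arrival_no, packet)
            self.capacity = capacity
            self.arrivals = itertools.count()   # tie-breaker: arrival order

        def enqueue(self, packet, priority):
            entry = (priority, next(self.arrivals), packet)
            if len(self.heap) < self.capacity:
                heapq.heappush(self.heap, entry)  # network not crowded:
                return None                       # every packet gets in
            if priority > self.heap[0][0]:
                # Full, and some buffered packet has lower priority: evict it.
                return heapq.heapreplace(self.heap, entry)[2]
            return packet                       # newcomer is lowest: drop it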

Contrast this with another, more drastic form of discrimination, which discards some low-priority packets even when it is possible to forward or deliver every packet. A network might, for example, limit low-priority packets to 20% of the network’s capacity, even if part of the other 80% is idle. I’ll call this non-minimal discrimination.
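
And a sketch of the non-minimal version, under the same toy assumptions: low-priority traffic is policed against a fixed 20% share, so a low-priority packet can be discarded even while the rest of the link sits idle.

    class NonMinimalPolicer:
        # Caps low-priority traffic at a fixed share of link capacity,
        # even when the remaining capacity is unused.
        def __init__(self, link_bps, low_share=0.20):
            self.low_budget_bps = link_bps * low_share

        def admit(self, low_priority, low_rate_bps):
            # low_rate_bps: measured recent rate of low-priority traffic.
            if low_priority and low_rate_bps >= self.low_budget_bps:
                return False    # dropped even if the other 80% is idle
            return True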

One of the basic questions to ask about any network discrimination regime is whether it is minimal in this sense. And one of the basic questions to ask about any rule limiting discrimination is how it applies to minimal versus non-minimal discrimination. We can imagine a rule, for example, that allows minimal discrimination but limits or bans non-minimal discrimination.

This distinction matters, I think, because minimal and non-minimal discrimination are supported by different arguments. Minimal discrimination may be an engineering necessity. But non-minimal discrimination is not technologically necessary – it makes service worse for low-priority packets, but doesn’t help high-priority packets – so it could only be justified by a more complicated economic argument, for example that non-minimal discrimination allows forms of price discrimination that increase social welfare. Vague arguments that you have to reserve some fraction of capacity for some purpose won’t cut it.

[Postscript for networking geeks: You might complain that it matters not only which packets are dropped but also which packets are forwarded first, and so on. True enough. I simplified things a bit to fit within a blog post; but it should be fairly obvious how to expand the principle I’m describing here to deal with the issues you’re raising.]

Comments

  1. Juniper routers were notorious for reordering packets when under high loads, for instance.

    The real issue is, best-effort networks are so efficient and good at what they do that it becomes impossible to sell “value-added” higher-QoS services. That’s the main reason ATM is dying – an overengineered answer to a question no one bothered to ask.

    Thus, if you want to sell your tiered service, you have to artificially degrade the basic service. As Brad Templeton rightly notes, there are several ways to do that short of overt maximal discrimination, and telcos are very good at finding loopholes in, or work-arounds to, legislation. After all, lawyers, more than engineers, have been their core competency since at least 1984.

    Of course, in a truly competitive market, being the first to degrade baseline service is a prescription for disaster. In countries with competitive broadband like Japan and most of Europe, you don’t hear about the outlandish proposals of Whitacre et al. Those are surefire signs of monopoly rent-seeking.

    Network neutrality laws are palliatives, not a real solution to the root cause of the problem: limited competition. Of course, given that the FCC is doing everything it can to restrict the choice of service providers on DSL and cable, in the Pollyanna-ish belief that alternative infrastructure will somehow materialize if given its own incentive of oligopoly rents, the omens are not good, at least not in the US.

  2. Oops, I forgot to add that those tools I build are for TESTING purposes. (They are based on Jon Postel’s notion of a “flakeway”.) I’d be pretty ticked off if someone were to use ’em as a predatory tool.

  3. It’s interesting that packet re-ordering has been mentioned.

    I have built (and sell) tools that can do various forms of network impairments, ranging from simple drop, delay, jitter, and reordering to actually modifying packets (e.g. inserting IP options, changing TCP sequence numbers, etc.)

    A lot of folks who have built software, particularly in VOIP devices, seem to have forgotten that the net not merely can, but frequently does, do nasty things like reordering packets. And their software often can’t cope.

    (Reordering comes from things like the fast and slow paths of routers, ARP delays, non-FIFO queuing, alternate routes or load-balanced parallel links, etc.)

    A surprising number of VOIP devices have trouble with packets that arrive out of order. And when I say trouble, I mean things ranging from crashing to merely generating an audible artifact.

    Reordering also changes the amount of delay or variable delay (jitter) and thus damages the quality of a conversational VOIP discussion. This adds another tool to the toolbox of providers that want to make competing traffic worse than their own. (Other things a provider can do include dropping some packets, adding a constant delay to packets, adding a variable delay to packets [jitter], duplicating some packets, fragmenting packets, inserting or modifying options or flags [e.g. lowering the ToS/QoS bits], etc.)
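
    A toy illustration (my own sketch, not these tools) of how just two of those impairments interact: per-packet random delay adds jitter, and jitter larger than the sending gap reorders the flow on its own.

        import random

        def impair(n_packets, gap_ms=20.0, jitter_ms=30.0, drop_prob=0.01):
            # One packet is sent every gap_ms; each gets a random extra delay.
            arrivals = []
            for seq in range(n_packets):
                if random.random() < drop_prob:
                    continue                        # simple drop
                arrival = seq * gap_ms + random.uniform(0, jitter_ms)
                arrivals.append((arrival, seq))
            arrivals.sort()                         # receiver sees arrival order
            return [seq for _, seq in arrivals]     # e.g. [0, 2, 1, 3, 5, 4]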

    One of the things limiting our, or a provider’s, ability to play too fast and loose with packet flows is that there is more than a trivial amount of junk code out there that may react very badly (or fail) as flows start to diverge from lab-like ethernet perfection. In other words, what we do in the middle of the net, even if OK per the RFCs, can, and often does, have negative effects on edge devices. (It also suggests that the quality of software in network devices, particularly those on the edge, is in need of improvement.)

  4. Neo has an interesting model. Probably an accurate one, judging from the ISPs’ behavior, especially considering this would allow them to degrade the sites they don’t like (e.g., cable companies slowing down YouTube). I don’t think legislation will solve the problem, but government-sponsored competition in the form of universal WiFi is another story.

  5. @James,

    Thanks for prodding me on the packet reordering issue. I’ll address it in a future post.

  6. Dan Maas says “I think competition between ISPs will limit the advantage of selectively degrading access, at least in markets with more than a handful of providers.”

    And which markets would those be? I am not aware of any (ignoring dialup providers)… most areas have exactly two non-dialup providers, the area phone co and the area cable co, and both of those have triply-split incentives!

  7. Wes Felter says

    Dan, the cost of bandwidth is much smaller than the cost of maintaining the local loop, so even if BellSouth kicked out the BitTorrenters they wouldn’t increase profit much.

    Isn’t minimal discrimination the same thing as work-conserving scheduling?

  8. I think competition between ISPs will limit the advantage of selectively degrading access, at least in markets with more than a handful of providers. But ISP competition will not prevent end-user price discrimination (as in my last post), because no profit-seeking ISP wants those heavy users :).

    (As much as we like to moan and scream about internet access getting more expensive, we should recognize that heavy users like ourselves have been getting very large subsidies from light users…)

  9. Brad’s scenario seems pretty plausible to me, with the addition that there’s a lot of revenue to be made by degrading access to web sites that haven’t purchased direct-connection services from you. If you see various big sites each allying themselves with a particular network provider, you’ll know that we’re well on the way to the old walled-garden model, with a few tiny gates providing passage from one to another. At that point, the competition among broadband suppliers will be for a small set of high-income urban users who use services from which the supplier can take a cut (as Brad says, cable tv: delivering eyeballs to advertisers).

  10. What I fear is that they will decide to switch to selling “Basic” broadband, identified as for web, email and a few other apps (but no video, voip or large file transfer) at a very low price ($15/month), and then sell “Extended” broadband, which is an unregulated pipe, at a much higher price for those who want it.

    Then they will charge VoIP and video and file transfer companies money to provide high bandwidth services to the “Basic” customers, so that the basic customers feel they are getting all the “major” services. This will remind you of cable tv.

    Want the long tail and new innovative apps? You have to buy the extended or premium access. This would turn the internet into cable tv.

  11. Dan, that’s not what we’re talking about here. If BellSouth wants to charge its heaviest-using customers more, that’s fine, but what the companies in question want to do is charge other companies (like Google) (who are not their customers) for better access to their customers (or perhaps, extortionately, not to degrade said access).

  12. Dan Maas says

    Yesterday’s WSJ had a quote from a BellSouth executive to the effect that 1% of their internet customers occupy 40% of their bandwidth (probably heavy BitTorrent users :). I think the whole point of network discrimination is to shut off that 1% (by forcing them to other ISPs), or at least to charge them much higher prices. Assuming that BellSouth’s costs are roughly proportional to bandwidth, this would nearly double their profitability.

  13. One assumption/constraint that seems implicit here (but ought, I think, to be made explicit) is that today’s internet isn’t really the adaptive-routing cloud that many people have been taught. For large parts of the trip between any given pair of leaf nodes, the path you get is pretty much the path you’re stuck with: any alternative routing would incur a serious cost in performance and consistency. The best routes may change over time, but not on timescales that are interesting to individual packets or (most) sessions.

    That, of course, makes a difference to congestion management, because in the single-path model, ultimately routers are throttling user traffic rather than redirecting it. Meanwhile, how large a buffer do typical routers have compared to the tens-of-seconds timeouts typical of most/many TCP applications?

  14. I like the idea of minimal discrimination. Historically this is one of the defining characteristics of the internet.

    The more that commercial interests implement non-minimal discrimination for profit, the further we get from the definition of the internet and the closer we get to a set of privately controlled though interdependent networks; better to call it something else, like the commercialnet.

    I know we’ve already headed partway in that direction. There is an incentive for all these companies who are taking advantage of the amazing emergent “public” internet for commercial gain to “add features.” The question is: how can we avoid this slippery slope?

  15. Ned Ulbricht says

    Maximal packet drop would occur when a flow is blackholed.

    I think it’s useful to define minimal packet drop as occurring only when some network resource (queue space) is exhausted. Then we can recognize that there may be different optimal packet drop rates to minimize latency or jitter, or to maximize throughput.

  16. @Karl,

    In a blog post for a general audience I had to leave out lots of technical details, including things like RED. Those were the sorts of issues I was hinting at in the postscript for networking geeks at the end. I think it’s reasonably clear how to extend the conceptual framework of the main post to deal with RED. You would allow a modest and conventional amount of RED dropping, with lower-priority packets presumably being dropped first.

    The key question is whether the network operator is dropping packets only for congestion control purposes, or whether it sometimes drops packets to keep low-priority traffic from getting better performance than the network operator wants it to get.

  17. Karl Auerbach writes:
    “We have learned that congestion management can require what seem to be counter-intuitive actions, like sometimes throwing away perfectly good packets.”

    Actually, it is “upper management” that requires such actions; their purpose isn’t to provide a service, however, but to make as much money as possible by doing as little as possible to go beyond merely providing the appearance of providing a service. After all, providing a real service requires actual effort, and may raise consumer expectations and additionally produce complaints, lawsuits, liability of some kind, or other such responsibilities that inevitably accompany actually doing something, and we wouldn’t want that, would we? It would detract from the bottom line.

    “TCP implementations, if they are truly and honestly implemented, do in fact have an altruistic component embedded in their algorithms that does make them respond well to methods such as RED.”

    But the most common TCP implementation on end-user computers is the one that comes with Windows, instead. This may be lamentable but it is also a fact of life that router engineers have little ability to influence.

    “slow access links (e.g. typical DSL or cable)”

    Wake up and smell the bandwidth — these are “really fast access links” from the end-user POV, since no ordinary citizen is permitted to have anything faster (that takes either a lot of money or being corporate). Only VIPs get what you consider non-slow. Whoa, you didn’t think we were living in a real, equal-opportunity free-market democracy did you? Ain’t no such beast.

    Also, you seem to be suggesting that the problem is how to prioritise VOIP so it is fast enough. The problem, as the big companies see it, is how to prioritise VOIP so it is degraded and sucky enough that people will continue to pay their big fat traditional-phone bills instead of switching to VOIP. Likewise, how to prioritise P2P so it is degraded and sucky enough that people will continue to pay $25 for a 15-cent piece of plastic at Tower Records. And so on.

    The problem, in short, is how to keep ever cheaper bandwidth and other nice things in the control of big corps and out of the hands of individuals who aren’t in stratospheric tax brackets so that they can continue to charge them an arm and a leg for phone service, content, and so forth. Keeping people from having access to alternatives to traditional distribution/service models for all these products is essential if the various middlemen involved are to continue to stomp around and shed scales instead of being buried under a thick layer of iridium and service tunnels full of cheap fibre optics.

    The big satellite TV providers (whose only “service” provided is to control and meter access; whether you actually sign up with them or not you are still taking up several tens of square metres of the signal footprint!), phone companies, RIAA, and MPAA are all agreed that cheap bandwidth to consumers is anathema.

    Cable companies too — they are TV providers first and ISPs second, and they don’t want their ISP division actually competing against their digital cable subscriptions via people torrenting their shows instead of paying through the nose for them. They actually have THREE-way split incentives, since there’s also advertising, which people avoid by torrenting content instead of getting and watching it the traditional way, which means the cable companies also don’t want the TV show getting onto anything like an open-ended PC; a PVR whose design and software they completely control is much better, since it can enforce ad-watching and do other things that provide no added value to the consumer without the consumer removing those “features”.

    If you think the debate is over how to best provide value to the consumer, you’re dead wrong. It’s over how to best provide value to their stockholders, and prop up the crumbling foundations of the old parasites in the content-publishing-and-distribution and metered-long-distance sectors, which in a truly free market would long since have gone extinct, being obsolete since the first TCP router was turned on several decades ago.

  18. My opinions:
    * Non-minimal discrimination should be banned.
    * Price-discrimination never serves any social welfare purpose, only corporate welfare purposes. (Tax-discrimination, on the other hand, via income brackets…)
    * The best minimal discrimination may be to prioritize packets that arrived a bit ago, but not quite a while ago, and to give least priority to packets that arrived ages ago. It’s most likely these packets, if sent on, would be useless anyway, because the apps at each end have already given up on them arriving (timed out or whatever). A likely implementation (sketched below) is to have part (half?) of the buffer be a FIFO queue, with packets that fall off this queue landing in a LIFO queue occupying the remainder. Packets that fall off that queue … fall off.
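
    A rough sketch of that FIFO-plus-LIFO idea, with illustrative sizes and a guessed serving rule:

        from collections import deque

        class FifoLifoBuffer:
            def __init__(self, half=32):            # half: illustrative size
                self.fifo = deque()                 # recent packets, in order
                self.lifo = []                      # stale packets, newest on top
                self.half = half

            def enqueue(self, packet):
                self.fifo.append(packet)
                if len(self.fifo) > self.half:
                    self.lifo.append(self.fifo.popleft())  # falls off the FIFO
                    if len(self.lifo) > self.half:
                        del self.lifo[0]            # oldest of all ... falls off

            def dequeue(self):
                if self.fifo:
                    return self.fifo.popleft()      # fresh traffic first
                return self.lifo.pop() if self.lifo else None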

  19. May I suggest that you have taken what is perhaps a rather narrow view of internet congestion management?

    What you have described is the older view that looks at the net as a collection of independent routers, each making its own, uncoordinated congestion management choices.

    However, congestion of the net is not merely a local matter. The net needs to be viewed as a distributed system in which there are feedback loops of various kinds. We have learned that congestion management can require what seem to be counter-intuitive actions, like sometimes throwing away perfectly good packets.

    For example, there’s RED – Random Early Drop – a technique in routers that discards TCP (not UDP) packets of flows that are consuming more than their “fair share” of bandwidth. The intent is not to prevent exhaustion of local buffers in a router, although that is a side effect, but rather to induce the TCP engine in the sending computer to back off and slow down (thus giving an opportunity to other TCP flows).

    And there is the relatively new “explicit congestion notification” bit at the IP layer: a forward-moving indication that congestion has been encountered, which, when it hits the destination TCP engine, results in back pressure on the sender to get it to slow down.
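
    A toy rendering of the RED-plus-ECN decision described above (the thresholds are illustrative; real AQM implementations differ):

        import random

        def red_decision(avg_queue, min_th=5, max_th=15, max_p=0.1, ecn=False):
            # Below min_th: no congestion signal. Between the thresholds:
            # signal with a probability that grows with the average queue
            # length. The signal is a drop, or a mark if the flow supports ECN.
            if avg_queue < min_th:
                return "enqueue"
            if avg_queue >= max_th:
                return "mark" if ecn else "drop"
            p = max_p * (avg_queue - min_th) / (max_th - min_th)
            if random.random() < p:
                return "mark" if ecn else "drop"
            return "enqueue"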

    As is true on highways, were every sender (or every automobile) to adopt a purely self-interested approach no matter what it does to everyone in the aggregate, the net (or highways) would not work as well as they could.

    TCP implementations, if they are truly and honestly implemented, do in fact have an altruistic component embedded in their algorithms that does make them respond well to methods such as RED.

    But then we come to conversational voice, VOIP on top of UDP.

    Unfortunately (for purposes of networking), voice has very tight time constraints – about 150 milliseconds each way. And with packetization delays at the sender consuming nearly 20% of that time budget even before each packet leaves the phone, toll-quality conversational VOIP, especially on slow access links (e.g. typical DSL or cable), requires that voice packets not get stuck behind long queues of big, slow HTTP packets.
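
    The arithmetic behind that budget is worth seeing; the link speed and packet size below are illustrative assumptions, not measurements:

        BUDGET_MS = 150.0              # rough one-way budget for conversation
        packetization_ms = 30.0        # ~20% of the budget, as noted above

        # One 1500-byte HTTP packet already on the wire of a 256 kbps uplink:
        serialization_ms = 1500 * 8 / 256_000 * 1000    # ~46.9 ms

        left = BUDGET_MS - packetization_ms - serialization_ms
        print(f"{left:.0f} ms left for the whole network path")  # ~73 ms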

    My own conclusion on network discrimination focuses not on the fact of discrimination, which I find to be a useful tool for congestion management and real-time conversational traffic, but rather on who is in control of that discrimination and for what purpose.

    To my mind, if the discriminatory aspect is under the control of the end-to-end users and their purpose is to improve (or enable) whatever it is they are trying to use the net for, then I consider it more benign than discrimination that is imposed by a provider in order to increase its revenue or market share.

    Yes, I know that in practice this is a very fuzzy distinction, and there’s really not a lot of difference between a provider making a premium class of service available for extra money in order to gain market share (and thus “bad” by my metric) and users choosing to use that premium service (and thus “good” by my metric). But as I said, I think discrimination, even in the absence of buffer exhaustion, has solid technical reasons to exist, at least at the low-bandwidth edges of the net.

    (I also recognize that allowing users to even have a paid-for choice is rather non-egalitarian in that it ultimately means that the wealthy get a better internet.)

  20. As a lay reader, I appreciate this very much. I was heretofore unfamiliar with these ideas. Not being a network engineer or business person, however, I just hope I’ll someday find a way to put this basic knowledge to some use.

  21. Worth mentioning that dropping packets unnecessarily worsens the bottlenecks behind the discriminating router, because the host has to resend.

    Maybe there’s a ‘good faith’ issue here: the practice causes problems elsewhere in the network by actually forcing a traffic increase.

  22. The packet-reordering issue isn’t just for network geeks. When you present the technical details this simply and clearly, the issue is apparent enough to interest the lay reader, too.

  23. As I noted in my post about your earlier article, Ed, even banning non-minimal discrimination does not stop them from making two tiers, because they can just make two pipes into their network. The old pipe, as fast as it is today, but never any faster, would peer with the broad internet in a network-neutral way. The new pipe, the one they charge Google and others for access to, would be fatter.

    The old pipe can saturate, and be forced to drop packets, if everybody runs high-bandwidth video downloads over it, while the new pipe is unloaded.

    This is already what they do today. Companies like Akamai (and Google) buy extra pipes into major ISPs to assure lots of bandwidth. The only change would be a deliberate decision on the part of the ISPs not to improve the capacity of their pipe to their regular internet peers.

    This would make them an inferior ISP, of course, as other ISPs naturally increase their main pipes as bandwidth gets cheaper or they get more customers or whatever. And the monopoly ISPs (ILECs, cable) could be regulated to keep them from letting their networks lag behind industry rates of improvement. The history of that is not good, though. In the end, if videocompany.com wants to reliably send video to the customers of a monopoly ISP, it may have to pay for space in a non-public pipe, screwing them and the user.