May Your Buffer Never Bloat
Buffer bloat is a big thing in the lives of the people who work on network protocols and big-iron router stuff. Some even smoosh it into one word: Bufferbloat.
In theory and in practice, buffers are meant to smooth things out, to level the flow. They're short-term storage for packets, the envelopes of data transmission, as they move from source to destination.
But when buffers get overrun with bits, they themselves cause delays. The remedy becomes the culprit. That’s buffer bloat.
REASON: VIDEO FILES
Buffer bloat is on the rise because of how much video we're sending and receiving over the Internet, from the two-minute clip shot on a smartphone, to the Netflix stream, to the live video coming from whichever webcam, to wherever we are.
Every time a packet transits a network, it runs into buffers. The “big iron” routers that run the Internet juggle billions of packets, from hundreds of thousands of different places, all the time.
Their job is to see where each packet is going, and to find the best route to get it there. As routers route, packets pile up in buffers — more so when volume is heavy.
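Here's a toy sketch in Python, with made-up numbers, of what "piling up" does to delay: the deeper the buffer gets, the longer the last packet in line waits.

```python
from collections import deque

# A toy router buffer. The numbers are invented, just to show the
# mechanics: queued packets translate directly into waiting time.
SERVICE_RATE = 10  # packets the router can forward per millisecond

buffer = deque()

# A burst of 500 packets arrives all at once.
for i in range(500):
    buffer.append(i)

# The last packet in line waits for everyone ahead of it.
wait_ms = len(buffer) / SERVICE_RATE
print(wait_ms)  # 50.0 -- a 50 ms delay from buffering alone
```

That's the bloat in miniature: the bigger the buffer, the bigger the wait.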
Video is heavy to begin with, relative to a phone call or a webpage request. And think about how much more video you’re doing on your phones and tablets than you did two years ago.
It’s getting to be a problem because the usual elixir — “Throw more bandwidth at it!” — isn’t enough. At issue is a tenet of how stuff moves over the Internet, using TCP/IP (Transmission Control Protocol over Internet Protocol), which requires an acknowledgement of every packet sent. (In the lingo, they go by “acks.”)
Turns out that roundtrip time (RTT) affects network performance as much as, or more than, available bandwidth. Latency trumps capacity.
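Some back-of-the-envelope arithmetic makes the point. This sketch uses assumed numbers and a deliberately simplified model of a fresh TCP fetch (one round trip for the handshake, one for the request, then the transfer itself), and shows that halving the RTT beats doubling the bandwidth for a small web object:

```python
def fetch_time(size_kb, bandwidth_mbps, rtt_ms):
    """Rough time to fetch a small object over a fresh TCP connection:
    one RTT for the handshake, one for the request, then the transfer.
    A simplification, but good enough to compare latency vs. capacity."""
    transfer_ms = (size_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return 2 * rtt_ms + transfer_ms

base = fetch_time(100, 10, 100)      # 100 KB at 10 Mbps, 100 ms RTT
more_bw = fetch_time(100, 20, 100)   # double the bandwidth
less_rtt = fetch_time(100, 10, 50)   # halve the round-trip time

print(base, more_bw, less_rtt)  # 280.0 240.0 180.0
```

Doubling bandwidth shaves 40 milliseconds; halving latency shaves 100. Latency trumps capacity.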
Help is on the way, of course, under a general mantle of “active queue management.”
One remedy, developed by Google, is a protocol called "SPDY" (pronounced "speedy," and not an acronym for anything). It aims to improve on HTTP (Hypertext Transfer Protocol) by using fewer TCP connections.
WHAT SPDY DOES
Ever get a "connection timeout" when loading a webpage? (Browsers typically cap HTTP at six active connections per host. Who knew?) SPDY fixes that, by multiplexing (smooshing) the next request onto an existing connection. Also in SPDY: compression and prioritization mechanisms. Some browsers (Chrome, Firefox, Opera) already use it.
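Multiplexing is easier to picture with a toy example. This sketch (hypothetical frame names, not SPDY's real framing) interleaves three streams onto one connection, round-robin, instead of opening a connection per request:

```python
from collections import deque

def multiplex(streams):
    """Round-robin frames from several streams onto one connection."""
    queues = [deque(frames) for frames in streams]
    wire = []  # what actually goes out on the single connection
    while any(queues):
        for stream_id, q in enumerate(queues):
            if q:
                wire.append((stream_id, q.popleft()))
    return wire

# Three responses' worth of frames share one TCP connection:
frames = multiplex([["a1", "a2"], ["b1"], ["c1", "c2", "c3"]])
print(frames)
# [(0, 'a1'), (1, 'b1'), (2, 'c1'), (0, 'a2'), (2, 'c2'), (2, 'c3')]
```

One connection, three conversations, nobody waiting in line for a free slot.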
Another goes by “CoDel,” for “controlled delay.” People pronounce it as a word: “Coddle.” Its inventors, Kathleen Nichols (girl power!) and Van Jacobson, describe it as a “no knobs” way of keeping delays low, even during big traffic bursts. They published a chewy paper about it called “Controlling Queue Delay.”
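For the curious, here's a much-simplified Python sketch of CoDel's central idea, not the full algorithm from the Nichols-Jacobson paper: watch how long each packet sat in the queue (its "sojourn time"), and only start dropping when that delay stays above a small target for a full interval. The constants are CoDel's published defaults; the bookkeeping is pared down.

```python
import math

TARGET_MS = 5      # acceptable standing delay (CoDel's default)
INTERVAL_MS = 100  # how long delay must persist before acting

class CoDelSketch:
    """A pared-down illustration of CoDel's dropping logic."""

    def __init__(self):
        self.first_above_time = None  # deadline set when delay first exceeds target
        self.drop_count = 0

    def should_drop(self, sojourn_ms, now_ms):
        if sojourn_ms < TARGET_MS:
            self.first_above_time = None  # delay is fine; reset
            return False
        if self.first_above_time is None:
            # Delay just crossed the target; give it one interval to clear.
            self.first_above_time = now_ms + INTERVAL_MS
            return False
        if now_ms >= self.first_above_time:
            # Delay has persisted: drop, and schedule the next check
            # sooner (by 1/sqrt of the drop count) so pressure ramps up.
            self.drop_count += 1
            self.first_above_time = now_ms + INTERVAL_MS / math.sqrt(self.drop_count)
            return True
        return False
```

No knobs to turn: just a target delay and an interval, which is exactly the point.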
Discussing the paper with Ars Technica writer Iljitsch van Beijnum last May, Jacobson dropped this tasty tidbit: "Things would probably go fastest if we had some interested party who would apply it, for example, in the cable data edge network."
Sounds like a gauntlet thrown! Either that, or maybe the Internet needs a Fitbit.