The debate swirling about emerging distributed CCAP (converged cable access platform) architectures is a good news/bad news story today.
The good news? There are multiple options that can be made to work. The not-so-good news? It’s not yet clear which approach is the “right” one, as some might work better than others depending on the cable operator and its individual needs.
But there is agreement on what’s driving the cable networking world toward more distributed forms of the CCAP, which packs in the functions of the cable modem termination system and the edge QAM to support all services. The general thought is that more distributed architectures will help operators support the surging bandwidth demand that’s being driven by IP video, gigabit broadband services, Ultra HD video, WiFi hotspot proliferation and the so-called Internet of Things. Cable headend space is expected to become tighter and tighter as more capacity is needed if the industry sticks with integrated CCAP architectures.
While there’s agreement that distributed approaches will help to deal with this future headend space crunch, several possible options have emerged:
- Remote MAC/PHY that places the edge QAM in the node
- Remote MAC/PHY that does not place the edge QAM in the node
- Remote PHY, which places the PHY in the node and the MAC processing in the headend
“All are good, none are perfect. All have some issues,” Tom Cloonan, chief technology officer—network solutions at Arris, said Thursday at a workshop titled: Distributed CCAP Architectures: Breaking Up Is Hard to Do.
Arris recently conducted a study that looked at the pros and cons of each approach, picking 26 attributes that, Arris believed, are among the most important to the cable operator. Those were then grouped into four broader areas: operational cost management, operational ease of use, infrastructure compatibility, and design simplicity (which factored in vendor time-to-market). Arris’s engineers then applied scores that weighed how well each architecture served each area.
Arris then tabulated those results to see how it all shook out. “What’s best?” Cloonan posed. “I don’t have the answer…In the end, I think that all of them are workable solutions and technically feasible.”
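The tabulation Cloonan describes amounts to a weighted scorecard. The sketch below shows the mechanics; the area weights, scoring scale, and per-architecture scores are purely illustrative, since Arris’s actual figures and 26 attributes were not published.

```python
# Illustrative weighted scorecard for comparing distributed CCAP options.
# All weights and scores below are hypothetical, not Arris's actual data.
ARCHITECTURES = ["Remote MAC/PHY + eQAM", "Remote MAC/PHY", "Remote PHY"]

# area -> (weight, per-architecture scores on a hypothetical 1-5 scale)
scorecard = {
    "operational cost management":  (0.30, [4, 3, 4]),
    "operational ease of use":      (0.25, [3, 3, 4]),
    "infrastructure compatibility": (0.25, [4, 4, 3]),
    "design simplicity":            (0.20, [3, 4, 3]),
}

def tabulate(scorecard):
    """Return the weighted total score for each architecture."""
    totals = [0.0] * len(ARCHITECTURES)
    for weight, scores in scorecard.values():
        for i, score in enumerate(scores):
            totals[i] += weight * score
    return dict(zip(ARCHITECTURES, totals))

for arch, total in tabulate(scorecard).items():
    print(f"{arch}: {total:.2f}")
```

With made-up numbers like these, the totals land close together, which mirrors Cloonan’s conclusion that no single option dominates.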
Given that, he expects that different MSOs will go in different directions and possibly bifurcate the supplier market.
Cisco Systems, meanwhile, has made no secret of which approach it favors: remote PHY, which relocates the physical-layer components to the node.
John Chapman, Cisco fellow and CTO of the company’s cable business unit, discussed how resiliency can be applied to a remote PHY architecture. He said part of the debate is whether operators want to centralize the software running the system (as remote PHY does) or decentralize it with remote MAC/PHY, which puts those software elements in the node.
“It’s fair to say that they both work,” Chapman said. “We can make anything work across all of the designs.”
Chapman noted that the remote PHY approach uses standardized interfaces to connect the headend to the remote PHY devices using pseudowires/tunneling protocols.
While that sort of approach might seem complex and “makes the brain hurt a little bit,” Chapman said remote PHY offers benefits that make it worth it, including higher reliability and the implementation of lower-cost digital optics, higher bit rates for DOCSIS 3.1 and lower plant maintenance costs.
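The pseudowire idea Chapman references is essentially tunneling: the headend wraps DOCSIS payloads in a tunnel header addressed to a particular remote PHY device. The sketch below is a hypothetical illustration of that encapsulation pattern; the field layout is invented for clarity and is not the actual remote PHY interface format.

```python
# Hypothetical pseudowire-style encapsulation between a CCAP core and a
# remote PHY device. The header layout here is illustrative only.
import struct

def encapsulate(session_id: int, seq: int, payload: bytes) -> bytes:
    # A 4-byte session ID identifies the pseudowire (i.e., which remote
    # PHY device/channel), and a 2-byte sequence number lets the far end
    # detect loss or reordering across the tunnel.
    return struct.pack("!IH", session_id, seq) + payload

def decapsulate(frame: bytes):
    # Strip the header and recover the original payload.
    session_id, seq = struct.unpack("!IH", frame[:6])
    return session_id, seq, frame[6:]
```

The point of the scheme is that the tunnel rides over standard digital Ethernet/IP optics, which is where the reliability and cost benefits Chapman cites come from.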
Cisco, he said, is also thinking through how to make the architecture redundant so that there is no single point of failure. While the nodes themselves aren’t redundant elements, operators will have ways to establish multiple backhauls to ensure that those nodes stay connected. Chapman’s suggested approach is to keep an active backup running, rather than building an on-demand backup connection that might take a few milliseconds to fire up.
“It’s a design choice, but it’s not something we mandate,” he said, noting that Cisco will provide the tools to support multiple options. “All remote PHY is, is a pseudowire management scheme.”