Metrics Matter

Why Establishing Standards for QoE Metrics Is Crucial for Online Video

Despite every effort to deliver the best user experience, how do you really know it’s the best if you’re not measuring the right metrics?

According to Cisco’s Visual Networking Index, consumer Internet video traffic will account for 80% of all global consumer Internet traffic in 2019. Consumers have voted with their eyeballs and binge-watching is here to stay. With TVs now wider than the average loveseat, and rapidly approaching sofa-size, achieving the broadcast-quality that viewers demand is anything but a trivial feat.

Since you can’t optimize what you can’t measure, the first step is clearly to gather video quality metrics. But which metrics? “Good” and “great” are relative, especially with no industry standards for measurement and statistical aggregation.

Quick: How do you calculate a rebuffer ratio? What about the percentage of buffer-impacted views? Is your reported average bitrate a measure of all bits on the wire, or of the labeled video bitrate?
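
To see why the definitions matter, here is a minimal sketch (with made-up session figures, not data from any real player) showing how two equally plausible definitions of "rebuffer ratio" yield different numbers for the same viewing session:

```typescript
// Two plausible "rebuffer ratio" definitions applied to one session.
// All figures below are illustrative assumptions.
interface Session {
  playMs: number;     // time spent actually rendering video
  rebufferMs: number; // time stalled after playback started
  startupMs: number;  // time from play request to first frame
}

const session: Session = { playMs: 580_000, rebufferMs: 12_000, startupMs: 8_000 };

// Definition A: stall time over viewing time (startup excluded).
const ratioA = session.rebufferMs / (session.playMs + session.rebufferMs);

// Definition B: stall time over wall-clock session time (startup included).
const ratioB =
  session.rebufferMs / (session.playMs + session.rebufferMs + session.startupMs);

console.log(ratioA.toFixed(4)); // 0.0203
console.log(ratioB.toFixed(4)); // 0.0200
```

The gap here is small, but across millions of sessions, and with longer startup delays, the two definitions diverge enough that comparing one provider's "rebuffer ratio" to another's becomes meaningless without an agreed definition.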

Achieving consensus on units of measure matters. Did you know that Napoleon Bonaparte was actually not short for his time? The myth started because he was documented as being 5’2” tall, but in pre-French Revolution units of measure. In modern units, he was nearly 5’7”, taller than the average Frenchman of the era!

The good news is that in measuring the quality of online video, we can borrow substantially from other industries’ experience establishing user-experience metrics, and hopefully arrive at standards much faster.

For example, it took nearly 20 years after the introduction of digital cable for the Society of Cable Telecommunications Engineers (SCTE) to publish industry standards giving cable operators reliable tools to measure and improve the customer experience. The first web application performance monitoring (APM) vendors hit the scene decades ago, not long after the surge in popularity of HTML and the web. The primary approaches initially included monitoring on the server side and measuring periodically from distributed test servers located in data centers. This allowed operations teams to know when their servers were completely down or consistently slow, but failed to catch most other issues.

Over time, measuring the actual end-user experience became crucial, and as a result, Real User Monitoring (RUM) was developed. RUM involves sending actual end-user performance data from the browser back to a central location for collection. Initially, all implementations were “roll your own,” and ensuring support and data consistency across all browsers was impossible.

In 2010, the W3C Web Performance Working Group was formed to create standards for measuring and improving web application performance. Its initial focus was on making metrics available to web developers via client-side APIs. To date, it has ushered in many excellent specifications for in-browser performance diagnostics, such as Navigation Timing, Resource Timing, and User Timing. However, the collection, or beaconing, of that data was quite literally “left as an exercise for the reader.” On Feb. 9, 2016, the group addressed this by publishing a draft of the Beacon API, a specification that defines how to collect this valuable data without impacting performance.
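
As a rough illustration, here is a minimal sketch of how a page might report a performance metric via the Beacon API. The collector URL and the payload shape are illustrative assumptions, not part of any specification:

```typescript
// Hypothetical analytics endpoint -- an assumption for illustration.
const ANALYTICS_URL = "https://example.com/beacon";

// Serialize one metric sample into a JSON payload (shape is our own choice).
function buildPayload(name: string, durationMs: number): string {
  return JSON.stringify({ metric: name, durationMs, ts: Date.now() });
}

function sendMetric(name: string, durationMs: number): void {
  const body = buildPayload(name, durationMs);
  // navigator.sendBeacon queues a small POST that survives page unload,
  // without blocking the main thread; guard for non-browser environments.
  const nav = (globalThis as any).navigator;
  if (nav && typeof nav.sendBeacon === "function") {
    nav.sendBeacon(ANALYTICS_URL, body);
  }
}

sendMetric("startup-time", 1250);
```

The key design point is that `sendBeacon` is fire-and-forget: the browser takes responsibility for delivery, so collecting telemetry no longer risks delaying navigation or unload, which was the main drawback of earlier roll-your-own beaconing.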

In short, it took a significant amount of time to acknowledge the need for, create, and fine-tune standards for performance and quality metrics. We are just now beginning to realize the benefits of standardization.

Online video metrics that measure customer experience, often referred to as Quality of Experience (QoE) metrics, cannot be accurately measured from the server delivering the content. Many experience-degrading factors occur between the end-user and the server, including congestion and resource exhaustion on the end-user’s side. Therefore, beaconing, or the sending of telemetry from the playback device, has emerged as a more accurate approach. But the problem is that there are no standard definitions of the metrics or standard APIs for accessing them across players.
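
To make the client-side measurement concrete, here is a minimal sketch of a stall accumulator a player beacon could report. The event names follow the HTML5 media element (“waiting” fires when playback stalls, “playing” when it resumes); the class itself is a hypothetical helper, not part of any player API:

```typescript
// Accumulates total stall time from HTML5-style media events.
class StallTracker {
  private stallStart: number | null = null;
  totalStallMs = 0;

  // Feed in media events with a timestamp (e.g. from performance.now()).
  onEvent(type: "waiting" | "playing", nowMs: number): void {
    if (type === "waiting" && this.stallStart === null) {
      this.stallStart = nowMs; // stall begins
    } else if (type === "playing" && this.stallStart !== null) {
      this.totalStallMs += nowMs - this.stallStart; // stall ends
      this.stallStart = null;
    }
  }
}

const tracker = new StallTracker();
tracker.onEvent("waiting", 1_000);
tracker.onEvent("playing", 3_500);  // 2.5 s stall
tracker.onEvent("waiting", 10_000);
tracker.onEvent("playing", 10_800); // 0.8 s stall
console.log(tracker.totalStallMs);  // 3300
```

Even this tiny sketch exposes the standardization gap: whether a stall during seeking counts, and whether startup buffering counts, are exactly the kinds of definitional choices that today vary from player to player.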

Further complicating things is the breadth of devices and applications that can play video: browsers, smartphones, tablets, set-top boxes, smart TVs, and more. The video players also vary and can be custom, third-party, or native to the device. Not surprisingly, some or all of the metrics of interest aren’t accessible in every case, and with contemporaneous industry initiatives encouraging cross-platform compatibility, increased security, and device stability, the problem is getting worse.

Where do we go from here? 

Fortunately, the Streaming Video Alliance, an organization seeking to develop, publish, and promote open standards, policies, and best practices in the video streaming ecosystem, is working to promote standards for QoE metrics for this industry. The group is currently working on rigorous definitions of end-user quality metrics and statistical aggregations for rolling up millions of such data points. Not the most exciting work, but you have to start somewhere, and establishing a common language should pay dividends on its own.

Sound familiar? Hopefully this time we will make progress faster. The solution is left as an exercise for the reader.

Jason Hofmann is a Senior Director, Advanced Architecture at Limelight Networks, a leading global content delivery network and founding member of the Streaming Video Alliance.