
Dolby recently helped Vizio kick off the launch of its long-awaited high-end Reference Series TVs, which will be sold via Vizio’s web site and delivered by custom installers.
These new TVs will be the first to present high dynamic range (HDR) using a system developed by Dolby Labs: Dolby Vision. Some have called Dolby Vision the gold standard for HDR presentation, offering images with greater brightness and color accuracy than systems based on the baseline HDR standards, which some have referred to as “plain vanilla.”
To help understand some of the differences that we will soon be able to observe between competing HDR systems, we caught up with Roland Vlaicu, Dolby Labs consumer imaging VP, at Vizio’s recent Reference Series launch event for a Q&A interview on this complicated topic.
Our interview with Vlaicu follows:
HD Guru: Dolby Vision is being introduced for the first time on Vizio’s Reference Series TVs enabled for high dynamic range (HDR), but Vizio says its sets do not include HDMI 2.0a inputs at this time, and those inputs are required to carry baseline HDR metadata between source and display devices. So, how is it possible for these sets to present Dolby Vision HDR?
Vlaicu: We had anticipated the challenge of getting high dynamic range (HDR) signals and HDR metadata over HDMI interfaces. So what we did was develop our own technology that tunnels all the way through HDMI interfaces back to version 1.4, including the signaling and the metadata. For a television to present Dolby Vision, we require that its HDMI inputs support Dolby Vision in addition to the on-board OTT apps that support it. To make that work, we developed in-band signaling as well as the ability to send 12-bit video over what is effectively an 8-bit interface. All of this is implemented in the televisions and source devices that support Dolby Vision.
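Dolby has not published the details of that tunneling scheme, but the general idea of carrying wider samples over a narrower interface is ordinary bit packing. As a toy illustration only (this is not Dolby’s actual method), two 12-bit samples can ride losslessly in three 8-bit bytes:

```python
def pack_12bit_pair(a: int, b: int) -> bytes:
    """Pack two 12-bit samples (0-4095) into three 8-bit bytes."""
    assert 0 <= a < 4096 and 0 <= b < 4096
    byte0 = a >> 4                       # top 8 bits of a
    byte1 = ((a & 0xF) << 4) | (b >> 8)  # low 4 bits of a + top 4 bits of b
    byte2 = b & 0xFF                     # low 8 bits of b
    return bytes([byte0, byte1, byte2])

def unpack_12bit_pair(data: bytes) -> tuple[int, int]:
    """Recover the two 12-bit samples from three bytes."""
    byte0, byte1, byte2 = data
    a = (byte0 << 4) | (byte1 >> 4)
    b = ((byte1 & 0xF) << 8) | byte2
    return a, b

# Round trip is lossless: nothing is thrown away in transit.
assert unpack_12bit_pair(pack_12bit_pair(4095, 1024)) == (4095, 1024)
```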
The reason for that is not that we want to undermine HDMI, but that we wanted a solution that works right away, without having to wait for a new HDMI standard. In the meantime, we have been working with the CEA, the industry and SMPTE to refine and standardize these mechanisms so that they gradually flow into HDMI. The first piece of evidence: the initial SMPTE standards we helped develop, ST 2084 for the signaling of the Electro-Optical Transfer Function (EOTF) [the method by which digital code values are transformed into visible light] and the static mastering metadata [ST 2086], have since been adopted into HDMI 2.0a. So, gradually, there will be a standardized way for HDMI to send a signal that we are already sending over HDMI through our own mechanism.
So, eventually, once this is all done, there will be a few different ways to accomplish the same thing. It was a timing issue. We just didn’t want to wait for the standardization of HDMI 2.0a.
HD Guru: So, will the baseline SMPTE ST 2084 HDR standard require HDMI 2.0a?
Vlaicu: The CEA standard that specifies baseline SMPTE 2084 requires HDMI 2.0a, and it is up to the manufacturer to include support for it. On the deliverables side, we offer a package that we call Dolby Vision VS10, which is our HDR playback solution with the ability to support multiple inputs. One is Dolby Vision dual layer, as we see with Vudu and Netflix. The other is Dolby Vision single layer, an operating mode that retains 12-bit fidelity even though the transport is 10-bit HEVC, and it offers the full ability to use dynamic metadata, which right now is unique to Dolby Vision. In addition to those two formats, there is the option to implement support for generic HDR metadata.
The reason we have those two solutions is so that all of our playback devices are always compatible with both the single-layer and the dual-layer profiles. The choice on the content distribution side as to which profile to use depends on whether you require backward compatibility.
If you require backward compatibility you would use the dual layer, because then the base layer can be standard dynamic range Rec. 709. But if you don’t require backward compatibility then you can use the single layer. The dual layer gives backward compatibility but it comes at the cost of a little bit more bandwidth required for the enhancement layer. If you choose single layer, the benefit is that it will be a little more bandwidth efficient, but then it doesn’t have the backward compatibility built into the scheme.
If you don’t need it, you don’t have to use it, and you have full flexibility. The single layer would also give you the ability to offer the baseline SMPTE standard ST 2084 and the Dolby Vision single layer HDR on top of that. So Dolby Vision has the ability to ride on a standard dynamic range base layer or a generic HDR base layer, whatever the content distributor chooses.
So, for example, the Blu-ray Disc Association (BDA) has mandated that any HDR disc always start from a generic HDR base layer and Dolby Vision can ride on top of that.
In another case, Vudu, for example, could use one version with the Dolby Vision system that plays back on any device. Another example where backward compatibility might be needed is broadcast set-top box designs, where the transmissions have to remain backward compatible in order to play back on existing devices.
A possible third case is when the resolution is not actual 4K UHD but is HD, because Dolby Vision also works with HD, and if you wanted to, for example, enhance an HD service with Dolby Vision HDR, you could float the Dolby Vision enhancement layer on the regular HD signal.
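Pulling these cases together, the choice reduces to a single question: is backward compatibility required? Here is a minimal sketch of that decision logic; the profile names and base-layer options come from the interview, while the function and field names are our own illustration:

```python
from dataclasses import dataclass

@dataclass
class DeliveryPlan:
    profile: str     # "dual layer" or "single layer"
    base_layer: str  # what legacy or generic devices can decode
    notes: str

def choose_dolby_vision_profile(needs_backward_compat: bool,
                                base: str = "SDR Rec. 709") -> DeliveryPlan:
    """Pick a profile per the tradeoff Vlaicu describes: dual layer buys
    backward compatibility at the cost of extra enhancement-layer
    bandwidth; single layer is leaner but not decodable by legacy devices."""
    if needs_backward_compat:
        # The base layer (SDR Rec. 709, generic HDR, or plain HD) plays
        # everywhere; Dolby Vision devices also decode the enhancement layer.
        return DeliveryPlan("dual layer", base, "more bandwidth, plays anywhere")
    # A single 10-bit HEVC stream carrying the Dolby Vision signal.
    return DeliveryPlan("single layer", "none (Dolby Vision devices only)",
                        "more bandwidth efficient")

# The Blu-ray case: Dolby Vision riding on a generic HDR base layer.
print(choose_dolby_vision_profile(True, base="generic HDR (ST 2084)"))
```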
HD Guru: How is Dolby Vision better than baseline HDR?
Vlaicu: There are a couple of things. As we developed the technology behind the SMPTE 2084 and 2086 standards, the 2084 standard described the PQ curve, which is the EOTF optimized for HDR. As part of that work, we identified that a container with a peak brightness of 10,000 nits and a color gamut that includes Rec. 2020 is desirable, and we adopted that for Dolby Vision.
The next question was, “how many bits do you need to represent that?” With the help of the PQ curve, which is better optimized for dynamic range than gamma is, we were able to fit that into a 12-bit signal representation. This 12-bit representation, in our opinion, is the minimum you should use to represent any sort of HDR signal to reach the Dolby Vision standard of quality.
This is because 12 bits gives you 4,096 code values. If you drop down to 10 bits, you end up with just 1,024. It sounds like taking 2 bits away is no big deal, but it effectively removes 75 percent of the signal values. That is why we feel a 12-bit representation is important, and Dolby Vision features that in all applications, whereas the generic approaches are limited to 10 bits because they pipe HDR through a regular 10-bit HEVC pipe in a plain-vanilla way.
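Both claims are easy to verify numerically. The ST 2084 (PQ) EOTF is public, so here is a minimal Python sketch using the constants from the standard, followed by the 12-bit versus 10-bit code-value arithmetic Vlaicu cites:

```python
# SMPTE ST 2084 (PQ) EOTF: normalized code value -> luminance in nits.
# Constants are as published in the standard.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875
PEAK_NITS = 10000.0      # the 10,000-nit container peak mentioned above

def pq_eotf(v: float) -> float:
    """Map a normalized signal value (0.0-1.0) to display luminance in nits."""
    e = v ** (1 / M2)
    return PEAK_NITS * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

print(round(pq_eotf(1.0)))  # 10000 -- full code hits the container peak
print(round(pq_eotf(0.5)))  # ~92   -- mid code sits near typical SDR levels

# The bit-depth arithmetic: dropping 2 bits removes 75% of the code values.
steps_12, steps_10 = 2 ** 12, 2 ** 10             # 4096 vs. 1024
print(f"{(steps_12 - steps_10) / steps_12:.0%}")  # 75%
```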
The other thing is dynamic metadata. The dynamic metadata, which we developed, sits on top of the static metadata and is a scene-based set of metadata that is regenerated for every single scene. Every time the scene cuts and there is an edit, there is a new set that informs the display management engine inside the television how to best map the color volume of the source content, where the color volume is the combination of the dynamic range and the color gamut, which we master at a very high level.
This mapping algorithm is complicated because you are mapping color gamut and dynamic range at the same time, and you want to stay close to the artistic intent and retain color accuracy. To do that well, you need this dynamic metadata to inform the display management engine of the mapping decisions it makes for every single scene. That is what sets us apart from generic HDR approaches.
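Dolby’s actual metadata format and mapping algorithm are proprietary, but the mechanism described can be sketched in miniature: a per-scene record consulted by the display-management engine, so mapping adapts scene by scene rather than following one whole-title curve. Everything below, field names included, is a hypothetical illustration:

```python
from dataclasses import dataclass

@dataclass
class SceneMetadata:
    """Hypothetical per-scene record, regenerated at every cut/edit."""
    scene_id: int
    min_nits: float  # darkest mastered luminance in the scene
    avg_nits: float  # average mastered luminance in the scene
    max_nits: float  # brightest mastered luminance in the scene

def map_pixel(pixel_nits: float, scene: SceneMetadata,
              panel_peak_nits: float) -> float:
    """Toy tone-mapping step: compress only when this scene actually
    exceeds the panel, so dim scenes pass through untouched instead of
    being squeezed by one static, whole-title curve."""
    if scene.max_nits <= panel_peak_nits:
        return pixel_nits  # the scene fits the panel: no remapping needed
    return pixel_nits * panel_peak_nits / scene.max_nits  # naive scale

dim = SceneMetadata(1, min_nits=0.001, avg_nits=40.0, max_nits=400.0)
bright = SceneMetadata(2, min_nits=0.01, avg_nits=200.0, max_nits=4000.0)
print(map_pixel(300.0, dim, panel_peak_nits=600.0))      # 300.0 (untouched)
print(map_pixel(3000.0, bright, panel_peak_nits=600.0))  # 450.0 (compressed)
```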
We also enable consistent performance of Dolby Vision across a wide array of television hardware specifications. So for example, a very high-end TV like the Vizio Reference Series does a wonderful job of representing the Dolby Vision image. But then if you think about future televisions with more limited capabilities, in order to do a good job with HDR and wide color in the image, you need all of the intelligence inside the Dolby Vision system.
And that is even more important on lower-end sets than it is on high-end sets, because high-end sets are designed to perform well from the get-go; entry-level and mid-range sets will have more challenges. With Dolby Vision dynamic metadata, we can ensure the hardware is utilized to the fullest extent, given the intelligence of the system and the fidelity of the source input. I think TV manufacturers are beginning to understand the value that we deliver, because the farther away you get from the high end, the more important the color mapping and dynamic range mapping become.
By Greg Tarr
Have a question for HD Guru? Email us.