MBPT Spotlight: Local Ratings and Sheep Stabilization
Nielsen’s effort to “reduce variation” through rating stabilization in local market ratings has spurred considerable discussion (and debate) among media buyers and media companies alike.
Originally scheduled to begin in August, this initiative has been put on hold for this year. This is a good thing. Through statistical smoothing, rating stabilization could have masked problems with two other big changes that were scheduled concurrently: implementation of viewer assignment (modeling of demographics) and the introduction of the code reader (an audio-listening meter replacing the diary in many markets). Instead, we will have a clearer understanding of how these two changes directly impact the data before the proposed rating stabilization takes effect.
So what exactly is the rating stabilization initiative? It’s the “smoothing” of rating data under certain conditions. For example, if the HUT (households using television) level for the quarter hour and the station/network household (HH) rating are not statistically different from the previous week’s, the published rating for all demos for that entity in that quarter hour becomes a weighted average of the current and previous weeks’ figures. The intent of this change is to reduce variation, or “statistical bounce,” in the weekly rating data. Sounds great, but it’s a potentially disastrous idea.
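To make the mechanics concrete, here is a minimal sketch of that logic in Python. The pass/fail significance flags and the 50/50 weighting are assumptions for illustration only; the exact statistical test and weights have not been specified.

```python
# Minimal sketch of the stabilization rule described above.
# ASSUMPTIONS: the significance flags and the 50/50 weight are
# placeholders; Nielsen's exact test and weighting are not public here.

def stabilize_rating(current: float, previous: float,
                     hut_changed: bool, hh_changed: bool,
                     weight: float = 0.5) -> float:
    """Return the rating to publish for one quarter hour.

    If neither the HUT level nor the station's HH rating is
    statistically different from last week (both flags False), the
    published figure is a blend of the two weeks. Otherwise the
    current week's measurement passes through untouched.
    """
    if hut_changed or hh_changed:
        return current  # a genuinely different week is published as-is
    return weight * current + (1.0 - weight) * previous
```

Note that nothing in this function improves the measurement itself; it only decides how much of last week’s number gets mixed into this week’s published figure.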
Indulge me in an imperfect and somewhat simplistic, but illustrative, rating stabilization allegory.
Back in the day, buyers were having a heck of a time trying to buy sheep from sellers who were spread across the country. It was danged inconvenient and hard to verify.
Ol’ AC, a sharp businessman who wanted to start a trading house, came up with a brilliant idea. Why not store all the sheep in one place and standardize their measurements? So AC came up with a pen system to do just that — sellers could bring their sheep to the pen, buyers could have a look and negotiations could be made with confidence.
It all worked dandy for a while. Eventually AC retired and successors took over. They tried to keep the place up, but things started to get out of hand in the sheep-trading biz. Buyers and sellers figured out that some sheep were better for specific purposes — sheep with long soft hair made better wool for stockings, while sheep with coarser hair were better for sweaters. Some people just wanted rare luxury wool from sheep in small, out-of-the-way places.
Broadcasting & Cable Newsletter
The smarter way to stay on top of broadcasting and cable industry. Sign up below
Along with the sheep segmentation, the fence system started to show its age. Big sheep were jumping the fences into other pens. Others burrowed under. Rot began to weaken the fences.
In any given week, some pens seemed to have a higher sheep count than they should, while in others, sheep inexplicably disappeared. Sometimes the sheep stood around the edge of the fence, giving the appearance of an empty pen, a so-called “zero pen” situation.
Displeasure ensued. Buyers and sellers clamored for consistency and less variation in the pen counts. AC’s successors began looking for ways to improve the yard. But they didn’t go to fence builders for better methods and materials. Instead, they hired a “statistics-slicker” from the big city, and he came up with a grand idea: sheep stabilization.
Yep. It was his determination that the weekly tallies should be stabilized using statistics, never mind the problems with counting sheep in the pen. He recommended that each week’s tally be compared to the previous week’s. If the numbers weren’t too different, the count would systematically be adjusted up or down so the weekly counts sat closer together, reducing the differences in the figures. After all, there was often no apparent reason for them to be different. AC’s successors thought this was generally a good idea, noting that the system wouldn’t stabilize sheep counts that were wildly different from the previous week’s, preserving estimates for known big sheep-sales events like the “Sheeper Bowl” and the “Lammys.”
Similarly, in the real world, some believe that rating stabilization will make buyers’ and sellers’ lives easier; there is an attractive management logic to the idea that many program audiences should be similar from week to week. However, buyers and sellers should be skeptical. Rating stabilization doesn’t really fix the cause of variation in the ratings. It just makes it look that way.
The supposed benefits of stabilization (reduced variation, fewer zero cells and increased precision in the ratings) are illusions. Stabilization would do nothing more than create a new artificial dataset that hides the deficiencies of the current one.
All of the so-called improvements are predictable and solely a result of the mathematics involved. There is no change in methodology or improvement in technique, and the underlying data remains as variable as it ever was. Implementing stabilization is truly the industry kidding itself: the willful use of a faux dataset because it doesn’t like the real one.
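A quick back-of-the-envelope run shows just how mechanical the effect is: blending two equally noisy weekly estimates shrinks the spread of the published series by roughly a factor of the square root of two, even though every underlying measurement is exactly as noisy as before. The rating level and noise figure below are invented purely for illustration.

```python
import random
import statistics

random.seed(1)

TRUE_RATING = 4.0   # hypothetical stable program rating
NOISE = 0.8         # hypothetical sampling error (standard deviation)
WEEKS = 10_000

# Each week's measurement: the true rating plus sampling noise.
measured = [random.gauss(TRUE_RATING, NOISE) for _ in range(WEEKS)]

# "Stabilized" series: a 50/50 blend of this week and last week.
stabilized = [measured[0]] + [
    0.5 * measured[i] + 0.5 * measured[i - 1] for i in range(1, WEEKS)
]

print(statistics.stdev(measured))    # ~0.80: the real sampling error
print(statistics.stdev(stabilized))  # ~0.57: looks "more precise"...
# ...but the underlying measurements are exactly as variable as ever;
# only the published numbers were blended after the fact.
```

The reduced bounce is pure arithmetic, not better measurement, which is the whole objection.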
Perhaps the most insidious side effect of rating stabilization would have been its capacity to hide or minimize problems with the methodology changes that were to be implemented concurrently. It is reasonable to expect there will be issues with the impending implementation of viewer assignment and the code reader.
The industry’s best interest lies in an honest look at these changes, without the obfuscation that would certainly occur if rating stabilization’s smoothing effects were introduced at the same time. Thankfully, the announced pushback of rating stabilization’s implementation suggests that will be the case.
Everybody wants improvements to the yard, and Nielsen’s honest efforts should be encouraged and applauded. But rating stabilization is no real improvement at all; it is a pretense. It shouldn’t just be postponed. It should be abandoned.