Clout Research just released a poll in Oregon. I’ll let you read it for yourself, but in a nutshell, it says that Kate Brown’s approval ratings are heading underwater, being a sanctuary state is unpopular with Oregonians, and voters want spending cuts to fix Oregon’s budget crisis. If true, this is good news for conservative reformers, but I have my reservations about the accuracy of Clout’s polling.
Clout Research is the new name of a firm previously known as Wenzel Strategies. Wenzel had a horrific, well-documented polling record. Since the name change, Clout’s polling has improved only marginally.
In May, Clout did a poll in Oregon that showed Trump with a 44–42 lead over Clinton. While anything is technically possible, Trump ever leading in Oregon is highly unlikely. If Clout’s polling were accurate, Trump’s eventual loss in Oregon would have been one of the most underreported stories of the 2016 election. Trump claimed he would win Oregon but failed to do so.
Clout’s polling improved in October, but by that time a number of additional Oregon polls had already been released. It’s possible Clout engaged in a common (but bad) polling practice called herding. Nate Silver of FiveThirtyEight has a good explanation of herding:
Herding is the tendency of some polling firms to be influenced by others when issuing poll results. A pollster might want to avoid publishing a poll if it perceives that poll to be an outlier. Or it might have a poor methodology and make ad hoc adjustments so that its poll is more in line with a stronger one.
The problem with herding is that it reduces polls’ independence. One benefit of aggregating different polls is that you can account for any number of different methods and perspectives. But take the extreme case where there’s only one honest pollster in the field and a dozen herders who look at the honest polling firm’s results to calibrate their own. (For instance, if the honest poll has the Democrat up by 6 points, perhaps all the herders will list the Democrat as being ahead by somewhere between 4 and 8 points.) In this case, you really have just one poll that provides any information — everything else is just a reflection of its results. And if the honest poll happens to go wrong, so will everyone else’s results.
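Silver’s point about lost independence can be sketched numerically. The following is a minimal simulation with made-up numbers (a true Democratic margin of +6, independent polls with roughly 3 points of sampling error, and herders who simply echo an honest poll within a couple of points, mirroring his 4-to-8-point example). It is an illustration of the concept, not a model of any real race; averaging herded polls fails to shrink the error the way averaging independent polls does, because the herded average is just the honest poll in disguise:

```python
import random

random.seed(1)
TRUE_MARGIN = 6.0  # hypothetical "true" race: Democrat +6


def independent_avg():
    # Thirteen independent polls, each with its own sampling error (~3 pts).
    polls = [random.gauss(TRUE_MARGIN, 3.0) for _ in range(13)]
    return sum(polls) / len(polls)


def herded_avg():
    # One honest poll; twelve herders publish numbers within 2 pts of it.
    honest = random.gauss(TRUE_MARGIN, 3.0)
    herders = [honest + random.uniform(-2.0, 2.0) for _ in range(12)]
    polls = [honest] + herders
    return sum(polls) / len(polls)


def avg_error(trials, poll_average):
    # Mean absolute distance between the polling average and the true margin.
    return sum(abs(poll_average() - TRUE_MARGIN) for _ in range(trials)) / trials


print("avg error, 13 independent polls:", round(avg_error(5000, independent_avg), 2))
print("avg error, 1 honest + 12 herded:", round(avg_error(5000, herded_avg), 2))
```

Under these assumptions the independent average lands well under a point from the truth, while the herded average misses by roughly the error of the single honest poll. And, as Silver notes, if that one honest poll goes wrong, every herded result goes wrong with it.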
Clout doesn’t publish their methodology or crosstabs, so nobody can evaluate whether Clout’s polling is any good much beyond what I’ve already described. FiveThirtyEight has given Clout a C- in their pollster ratings, which Clout has disputed. I found it interesting that they attacked Nate Silver for supposed inaccuracy in the 2016 election and the Super Bowl when they themselves have an incredibly spotty history of predicting elections with their numbers. I covered the value of crosstabs way back in 2015 when I wrote All About Polling: Part One.
I hope Clout continues to improve their polling and releases their methodology and crosstabs. Until then, we can only use what we know to evaluate their polling. What we know about Clout is that they are right on occasion but have a much longer history of being inaccurate. Don’t put too much faith in this one poll.