(Please note the addendum to this post by Jeff, below. --ed)
Among other proposed changes, including tinkering with the number of teams and potentially dropping the preseason poll, the USA Today/Coaches Poll is considering going back to a secret ballot.
"Historically, we have never released the votes," AFCA executive director Grant Teaff said. "When it came up that, OK, it would be better if you did, I think there was acquiescing by the coaches. As to whether it's helped the poll or not, I don't think I can really say. Whether it's hurt it or not, I don't know. The only thing that we can base a decision on, as AFCA, is what the experts say about it.Here's the line that really caught me.
"We've obviously proven our loyalty to both the BCS and USA Today by releasing those," Teaff said. "But the question is whether that's the correct thing to do or not. Does that give us the way to have the best possible poll we can have?
"There's also a question of, should all voters be anonymous or not?" Teaff said.
The coaches' poll, which began in 1950, has often been the center of controversy. Critics have noted that voters have a financial stake in the outcome because their conferences benefit from drawing lucrative BCS berths. There are also questions of favoritism toward friends and bias against rivals.

"The perception is that there's a huge bias, and we've never really found that," Teaff said.

Consider it "found," Coach Teaff. Readers of BGS will recall that Jeff has broken down the Coaches' final tally over the last couple of years and has found clear evidence of bias in the polls: toward the individual coach's conference, his opponents, and especially his own team.
You can find the BGS examinations of the bias in the Coaches Poll here: 2005, 2006, 2007. As Jeff noted in his first conclusion:
It comes as no surprise that the Coaches Poll is fraught with bias. However, since this is the first year we actually get to see the results, it's still somewhat shocking to see such blatant gamesmanship laid bare. The supposed advantage of the BCS polls, and the Coaches Poll in particular, is that you have a body of "football experts" who are ranking the teams; their vast experience and acumen is supposed to lend the poll unquestioned authenticity.

That's pretty much what happened. By the 2007 study, we noted that transparency wasn't thinning the biases at all; in fact, several of the bias metrics had actually worsened.
Unfortunately, with so much money at stake, with careers hanging in the balance, and with so much rampant conflict of interest, the Coaches Poll is anything but authentic and honestly considered. Perhaps by revealing the votes this year, egregious voters will check themselves a bit next year, but considering what's at stake (and the absence of censure), I tend to doubt it. As long as it's included in the calculus, the Coaches Poll will remain the most problematic component of the BCS.
I don't know what to think about the coaches going back to radio silence, other than they are simply afraid of transparency. The coaches' idea of "the best possible poll", as Teaff put it, apparently includes the right to vote for themselves and their friends with impunity, and without the annoyance of that pesky public scrutiny. Long live bias!
Addendum (by Jeff)
For the 2008 season, I never got around to publishing my bias analysis, but the story was exactly the same as in the prior three years. Instead, what I intended to post was an analysis of the "accuracy" of the coaches poll. The bias in the poll is obvious to anyone who looks at the numbers (excluding Grant Teaff, of course). But what surprised me is how inaccurate the poll actually is, particularly in the most important game of the season. Excerpts from the post I never finished are below.
...This year, I thought it would be interesting to see how accurate the coaches are, and I think I have a fairly reasonable way to evaluate this. Each coach has provided us a list of his top 25 teams (and, by default, the teams he feels are 26th or worse). The current array of bowl games provides us with a host of neutral-site matchups, most of which were arranged somewhat indifferently through contracts and tie-ins with the conferences. So, with a little number crunching, it is pretty easy to evaluate how well the coaches did in their annual Pick-'em contest over the last four years: a better-ranked team should beat a lesser-ranked one, and a ranked team should beat an unranked one. I omitted games between unranked teams.
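The counting scheme described above can be sketched in a few lines of Python. To be clear, this is a hypothetical reconstruction of the method, not the actual script used for the analysis; the ballot format, team names, and game results below are made up purely for illustration.

```python
def score_ballots(ballots, games):
    """Count how often a ballot 'predicted' the winner of a neutral-site game.

    ballots: list of dicts mapping team name -> rank (1-25);
             unranked teams are treated as rank 26.
    games:   list of (winner, loser) tuples from bowl results.
    Returns (correct, total); games between two unranked teams are skipped.
    """
    correct = total = 0
    for ballot in ballots:
        for winner, loser in games:
            w = ballot.get(winner, 26)  # winner's rank on this ballot
            l = ballot.get(loser, 26)   # loser's rank on this ballot
            if w == 26 and l == 26:
                continue  # both unranked: no prediction to score
            total += 1
            if w < l:  # a lower number means a higher ranking
                correct += 1
    return correct, total

# Toy example with one made-up ballot and two made-up results:
ballot = {"Texas": 1, "Southern Cal": 2, "Ohio State": 3}
games = [("Texas", "Southern Cal"), ("Boise State", "Ohio State")]
print(score_ballots([ballot], games))  # (1, 2): right on the first, wrong on the second
```

The one judgment call is how to treat unranked teams; scoring them all as a tie at "rank 26" matches the rule above that a ranked team is predicted to beat an unranked one, while games between two unranked teams yield no prediction at all.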
Results. As it turns out, the coaches aren't all that good at picking games. There were 3,773 "predictions" which could be drawn over the last four years' bowl games. Of those, coaches accurately ranked the winner higher than the loser 1,955 times (less than 52% of the time). Bear in mind, the coaches are simply picking the winner, not picking against the spread.
BC(mes)S. I then considered the myriad of bowl games and the variety of teams out there and decided it might be fairer to look only at the BCS games. After all, those are the most important games, involving the best-known teams. However, as it turned out, the coaches are actually worse in BCS games, ranking the winner correctly less than half of the time (566-583, or 49.3%). You would be better off flipping a coin to figure out who is going to win a BCS game than looking at how the coaches ranked the teams.
#1 vs #2. In all fairness to the coaches (or whoever does their voting), they aren't really that bad in all of the BCS games. Their average for non-championship games is actually a more respectable 57%, but their overall average is dragged down by their horrible record in picking the most important game in the country. Over the last four seasons, there were 46 instances where a coach correctly ranked the eventual national champion ahead of its opponent prior to the NC game, and 197 instances where they failed. That is less than 20%, for those of you keeping track at home.
| #1 versus #2 | 2005 | 2006 | 2007 | 2008 |
| --- | --- | --- | --- | --- |
| Correct | 7 | 0 | 12 | 27 |
| Incorrect | 55 | 61 | 48 | 33 |
Looking at the individual seasons:
- In the 2005 season, 7 coaches correctly placed Texas/Southern Cal at #1/#2, while 55 reversed the order. Texas won a close battle in the Rose Bowl, 41-38.
- In 2006, all 61 coaches picked Ohio State number one, while 48 coaches picked Florida #2 and 18 picked them #3. The result? A 41-14 drubbing of the unanimous #1 in the Fiesta Bowl.
- In 2007, 12 coaches correctly ranked LSU over Ohio State (and 48 did not), although only 3 of those 12 had Ohio State at #2; the other nine ranked the Buckeyes between 3rd and 6th.
- In 2008, 27 coaches correctly picked Florida to win the NC, while 33 picked Oklahoma. Nine coaches felt that Florida didn't even belong in the game and ranked them 3rd. Not surprisingly, six of those nine coaches were from Big XII schools, and all six picked Oklahoma against another Big XII school instead of Florida (five picked Texas, and Mike Leach put his own Texas Tech team at #2, well above its consensus #8 ranking).
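The per-season tallies above can be checked against the overall 46-197 record cited earlier (2006 counts as zero correct, since every ballot had Ohio State ahead of Florida). A quick back-of-the-envelope check:

```python
# Per-season counts taken from the season breakdown above.
correct = {2005: 7, 2006: 0, 2007: 12, 2008: 27}
incorrect = {2005: 55, 2006: 61, 2007: 48, 2008: 33}

total_correct = sum(correct.values())      # 46
total_incorrect = sum(incorrect.values())  # 197
hit_rate = total_correct / (total_correct + total_incorrect)
print(total_correct, total_incorrect, round(hit_rate, 3))  # 46 197 0.189
```

So the seasons do add up, and the coaches' collective hit rate on the title game really is under 20%.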
In any event, the coaches poll is both biased and inaccurate, and should be scrapped altogether. Although nothing is 100% accurate, other options are better and should be considered. Computer polls pick winners at rates well above 50%, as does the algorithm used by the oddsmakers in Vegas. Even some "softer" approaches, like the NCAA basketball selection committee, are far more accurate. However, instead of fixing the problem, the NCAA is opting to sweep it under the rug, sending a fine message to students everywhere: do whatever you want as long as it is in your self-interest; if your mistake becomes public, lie about it and cover up the evidence.