With recruiting at the forefront of college football headlines in recent weeks, I am reminded of a lingering question that I've never been able to answer: "How much difference does talent really make?" Many a struggling coach has taken cover behind the cloak of "deficient talent," and sometimes you'll hear an excuse offered such as, "just wait until [said coach] gets his own players in there." But each year we see a host of less-star-studded teams winning plenty of games, while some more glittery squads under-perform. So, how much difference does "talent" really make?
If your answer to this question (like the answer to life, the universe, and everything) is 42, you wouldn't be far off. As we'll see below, talent explains about 43 percent of the difference in team performance.
Now, before we go any further, I'd like to throw in a couple of hundred caveats. This is not a precise analysis, nor is the data particularly solid (you can read more under Methodology below). But this was an interesting exercise that might, over time, shed some light on the value of talent, and the ability (or inability) of coaches to maximize its potential. Or, it might just be meaningless drivel that makes for fun posting on a blog. In any case, here goes nothing.
Data Mining. I used two sets of data for this analysis: Jeff Sagarin's Predictor Ratings to judge a team's performance (which seemed a more objective choice than any of the human polls) and the Rivals recruiting score to peg a team's talent rating.
Rivals has data on their web site going back to 2002, which means that we can look at teams' performance over the 2006-2008 time frame (teams in '06 had 5th years that were recruited in '02, seniors that were recruited in '03, juniors in '04, etc.). Rivals assigns a value to each class, which I loaded into a spreadsheet. Then I regressed it against Sagarin's rating for each team. Nothing too crazy there.
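For the curious, the setup amounts to an ordinary least-squares fit of Sagarin rating against total Rivals class points. Here is a minimal sketch of that fit in Python; every team name and number below is invented for illustration, not taken from the actual Rivals or Sagarin data:

```python
# Toy version of the article's regression: Sagarin rating vs. total
# Rivals recruiting points. All values are made-up placeholders.
teams = {
    # team: (total Rivals points across five classes, Sagarin rating)
    "Powerhouse U":  (9500, 91.0),
    "Mid Major St":  (4200, 78.5),
    "Overachiever":  (3800, 84.0),
    "Struggle Tech": (2900, 62.0),
}

xs = [points for points, rating in teams.values()]
ys = [rating for points, rating in teams.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least squares for a single predictor
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# R-squared: share of rating variance explained by recruiting points
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"rating ~= {slope:.4f} * points + {intercept:.1f}, R^2 = {r_squared:.2f}")
```

With the real data, the R-squared of this fit is the "43 percent" figure discussed below.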
The Results. So let's look at what this means. Regression output lends itself naturally to prediction, and the results fall in line with what I'd expect: the overall data does seem to have some predictive value, and a reasonably meaningful trend line can be drawn linking the two sets of data.
And here's a table listing the teams in order of performance.
The following teams were the Top 5 over-performing teams of 2006-08. These teams achieved a higher Sagarin rating than their talent would predict by the widest margin.
|Top 5: Overachievers, 2006-2008|
|Rank|Team|Performance relative to talent|
No surprises here; this is essentially a list of the "BCS Busters" from the WAC and MWC over the past few years. Also near the top of the list were other teams that have come on strong out of the blue over the past few seasons: Wake Forest, Rutgers, South Florida, Missouri, and Texas Tech were all in the Top 15. As we would also expect, no powerhouse programs made the top quartile of teams. Having class after class of highly rated recruits makes it difficult to over-achieve. Among the traditionally strong teams, Ohio State, Oklahoma, and Florida led the pack, ranked 34th to 37th overall (with Ball State sandwiched in the middle). These teams all overachieved by 5 to 6%, which is probably an admirable feat given the level of talent at those schools to begin with. Other notable powerhouse programs were Texas (ranked 44th, +2%), Southern Cal (47th, +2%), and LSU (52nd, 0%).
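The over-/under-achiever rankings boil down to sorting teams by their regression residuals: how far each team's actual Sagarin rating sits above or below what the trend line predicts from its talent. A minimal sketch, using a placeholder fitted line and invented point totals and ratings (not the real values):

```python
# Rank teams by over-performance relative to a talent-based prediction.
# The slope/intercept and all team values are illustrative placeholders.
slope, intercept = 0.003, 64.0   # assumed fit: rating = slope * points + intercept

teams = {
    # team: (total Rivals points, actual Sagarin rating) -- made-up numbers
    "Boise St":   (2500, 88.0),
    "Utah":       (2700, 86.0),
    "Miami FL":   (8000, 79.0),
    "Notre Dame": (7500, 78.0),
}

def over_performance(points, rating):
    """Percent by which a team beats (positive) or misses (negative)
    the rating its talent would predict."""
    predicted = slope * points + intercept
    return (rating - predicted) / predicted

ranked = sorted(teams.items(),
                key=lambda kv: over_performance(*kv[1]),
                reverse=True)
for team, (pts, rat) in ranked:
    print(f"{team:11s} {over_performance(pts, rat):+.1%}")
```

Low-talent teams that win anyway float to the top of this sort; talent-rich teams that lose sink to the bottom.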
Of course, every list that has a top also has a bottom, which is not good news for ND fans. The bottom five (Eastern Michigan, Tulane, Utah State, Idaho, and North Texas) were all from weaker conferences, which I suspect is the result of a floor in the recruiting rankings, sort of like getting points for signing your name on the SATs (Methodology (1), below, has more on this). The bottom five are all teams that struggle in recruiting, so the points they got for "signing their name" are probably inflated, making their recruiting classes rate better than they should and thus giving the appearance of under-performing.
Most teams from BCS conferences do not end up at the bottom of the recruiting rankings, but when filtering for only BCS schools, we finally see the Irish.
|Under-performing BCS teams, 2006-2008|
|Rank|Team|Performance relative to talent|
Among major schools, only Miami underperformed its talent level by a bigger margin than the Irish. And the other schools on the list are not the company you want to keep: teams like Syracuse and Tennessee, which have recently fired coaches for under-performing; Florida State, where Bowden is biding his time while the program spirals down; and Washington -- well, no comment.
In fairness to the Irish, their talent is weighted more heavily toward underclassmen than that of any other team on the list. Furthermore, the data seems to over-weight underclassmen, so perhaps the news is not as bleak as it seems. Next season, a 5th-year class worth 820 points is replaced by an incoming freshman class assigned 1564 points, so at least the trend is moving up.
|Rivals Points Assigned by Recruiting Class, bottom BCS teams|
|School|5th Years|Seniors|Juniors|Sophs|Frosh|Pct Underclassmen|
But the news isn't all bad for the Irish. Using the talent data to predict a Sagarin top 25 for next season puts the Irish in line for a very solid season, albeit not quite yet in the top 10.
|Predicted 2009 Sagarin Ratings based on Talent Scores |
The 2009 Irish will finally have senior-class talent on par with their freshmen; now it is up to the coaches and players to pull that talent up a notch and over-perform. If ND over-performs at the level of Ohio State, Oklahoma, or Florida, they could achieve a 90.6 Sagarin rating, good for fourth in the country. But if ND under-performs in 2009 by its average over the last three years, we could be looking at a Sagarin rating of 75 and a 32nd-place national ranking.
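The scenario arithmetic above is just a percentage adjustment applied to a talent-only baseline. A quick sketch: the 85.5 baseline below is an assumed value chosen so the output roughly matches the figures quoted, not a number pulled from the actual regression output, and the adjustment percentages are likewise approximations:

```python
# Apply over/under-performance scenarios to a talent-predicted rating.
# The baseline of 85.5 is a hypothetical talent-only Sagarin prediction.
baseline = 85.5

def scenario(adjustment):
    """Scale the baseline by an over/under-performance percentage."""
    return round(baseline * (1 + adjustment), 1)

best_case = scenario(+0.06)    # overperform like OSU/OU/Florida (+6%)
worst_case = scenario(-0.12)   # underperform by roughly ND's recent average
print(best_case, worst_case)
```

A 6% over-performance lands near the 90.6 best case, while a double-digit shortfall lands near the Sagarin-75 scenario.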
Fool's Gold or Nuggets? A couple of other interesting things popped out from this analysis.
First of all, the number of five-star players recruited had no value whatsoever in predicting the strength of a team (if anything, the relationship was slightly negative). What did show some correlation (an R-squared of 43%) was the value that Rivals assigned to the total class.
Secondly (and oddly enough), the younger classes carried more weight as predictors of a team's success than the older ones (see Methodology for why I think this might be the case). The mix between classes was as follows:
|Predictive Impact by Class|
|5th-year|Seniors|Juniors|Sophs|Frosh|
|12%|11%|21%|25%|28%|
I'm certainly not surprised that the 5th-year class shows a lower predictive value; there aren't necessarily a lot of players there, as most of the talented players forgo their final year of eligibility to pursue an NFL career. But the high numbers for freshmen and sophomores did surprise and concern me. I would expect the Senior and Junior talent to contribute most to a team's performance rating, but it doesn't show up in the data. This could be a chicken-egg situation, as past performance may have led to both current performance and the ability to attract highly rated recruits (see Methodology (2)), meaning the causality is reversed: freshmen come to the program because it is going to be good, rather than making significant contributions to the team's success. It could also be an oddity of the relatively limited time frame involved (2006-2008).
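One way to read the mix above is as a set of weights for combining the five class scores into a single talent number. A sketch of that weighting, where only the Notre Dame 5th-year (820) and freshman (1564) point totals come from this post and the Senior/Junior/Sophomore values are placeholders I made up for illustration:

```python
# Combine per-class Rivals points into one talent score using the
# predictive mix from the table (weights sum to 97% due to rounding).
weights = {"5th": 0.12, "Sr": 0.11, "Jr": 0.21, "So": 0.25, "Fr": 0.28}

nd_points = {
    "5th": 820,    # from the post
    "Sr": 1400,    # placeholder value
    "Jr": 1450,    # placeholder value
    "So": 1500,    # placeholder value
    "Fr": 1564,    # from the post
}

weighted_talent = sum(weights[c] * nd_points[c] for c in weights)
print(f"class-weighted talent score: {weighted_talent:.0f}")
```

Because the freshman weight (28%) is more than double the 5th-year weight (12%), swapping a weak 5th-year class for a strong freshman class moves this score noticeably.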
Methodology Notes. Let me admit outright that this analysis is fraught with potential potholes. Well, maybe it's not that bad, but it's not anything I'd want to stake the mortgage on, even these days. I did do a couple of things to make the data more accurate. I threw out Western Kentucky and the two Florida Airports (Florida International and Florida Atlantic). All three schools have been in 1-A (I still can't say "the Bowl Subdivision") for only a couple of years, and therefore I didn't have a complete data set for them. I also removed the three service academies, since I believe their recruiting process is significantly different from the rest of college football (if someone out there knows otherwise, please let me know).
But even with my attempts to clean up the data, it is still very squirrely. While Rivals does an excellent job of ranking recruits and recruiting classes, assigning a quantifiable number to them has to be a somewhat arbitrary process. Claiming one class is 10% better or worse than another is probably not the way they intended people to use their numbers, but it is the best I have available to me. I feel much more confident in Sagarin's computers, since the inputs are all hard facts (scores), but even those rankings only predict the right outcome about 70% of the time or so. And many people may argue with some of the results, such as West Virginia's #1 ranking in 2007.
Also note that this analysis doesn't take into account injuries, transfers, early NFL entries, or how many 5th years actually stick around. Nor is there any differentiation between positions, and I suspect that an otherwise average team with a standout QB is much better than one with a stellar DB. Additionally, depth at a position is not explicitly factored into the data, though I suspect there is some value in it: a team with four highly rated QBs will be better than a team with only one, but probably not to the degree that the raw point totals would indicate. If anyone has the time to pull all of this data together, I'd be glad to help with the analysis. We should be done by 2012 at the latest.
(1) Concerning the graph and the most under-performing schools, my suspicion is that Rivals focuses much more on the high profile players and as a result, the ratings are much more meaningful at the top end of the spectrum than the bottom. Recruits that end up at smaller programs show less differentiation in their numbers, and as a result, the lowest number that Rivals will assign any given recruiting class is around 60. That is why the scatterplot is skewed with a horizontal cluster around 65 and why the most under-performing programs were all programs that traditionally struggle with recruiting. It is no surprise that all of the bottom five programs had 2-4 of their recruiting classes receive "baseline" ratings. I suspect if there were more differentiation among the lower-starred recruits, we'd see better correlation in the data and a steeper line.
(2) The fact that the data seems to put more weight on the underclassmen may be a chicken-egg situation. In looking at the data, an excellent predictor of a team's performance in any given season is its performance last season. I suspect that this weighting towards underclassmen is to some degree validating the statement that "strong teams tend to remain strong teams, and that attracts high-profile athletes." So, instead of a highly rated freshman class having significant impact on a team's performance, it is more likely that last year's strong performance both attracted this season's top recruiting class and contributed to this season's strong performance on the field.