Wednesday, August 29, 2007

Preseason Analysis Part II - Correlation with Regular Season, More on Expected Wins

Previously on Preseason Analysis...
And now Part II of Preseason Analysis.

Preseason wins and losses are essentially meaningless, but maybe team performance, as measured by things like run efficiency and pass efficiency, still carries some information over into the regular season. Intuitively, first-string players are still playing first-string players, so some true reflection of skill is bound to show up. So I took preseason box scores from 1997-2006 and looked at what the stats say about teams' regular season performance. Please note that separating out first-string offensive stats alone is too time-consuming (and possibly not that useful), so I'm just using total team stats for each game. Because of missing stats and their questionable usefulness, I've dropped punt returns, kick returns, and penalty first downs from my models.

Average preseason and regular season league averages

Stat | Preseason | Regular Season
R    | 3.8239    | 4.0708
P    | 5.5977    | 5.8866
SR   | 0.069901  | 0.068252
3C   | 0.37317   | 0.3777
PY   | 61.886    | 54.266
IR   | 0.026319  | 0.029888
FR   | 0.04069   | 0.031635

Correlation of preseason league average with regular season league average

Stat | Corr. coef. | P-value
R    | -0.023      | 0.94972
P    | 0.53682     | 0.10961
SR   | -0.03985    | 0.91297
3C   | -0.16464    | 0.64945
PY   | 0.72912     | 0.016728
IR   | 0.052921    | 0.88456
FR   | 0.41608     | 0.23171


R=Rush, P=Pass, SR=Sack Rate, 3C=3rd Down Conv., PY=Penalty Yards, IR=Int. Rate, FR=Fum. Rate

Simply put, the preseason game is appreciably different from the regular season game. The first thing you notice is that the preseason favors defense. Yards per play are lower, sack rates are higher, fewer third downs are converted, and fumble rates are higher. On the other hand, interception rates are lower. More penalty yards are racked up per game, too. Perhaps coaches are more conservative on offense, saving most of their playbooks for the regular season. Or maybe it's because coaches give almost all of their QBs playing time.

Meanwhile, pass yards per play, penalty yards per game, and fumble rate are the only three stats whose preseason league averages correlate appreciably with their regular season counterparts, and only penalty yards has a significant p-value. The p-value is essentially the probability that a correlation coefficient that extreme could be achieved with entirely random inputs; usually, the threshold for a stat to be considered significant is 5% or less. Given the small sample size of the preseason, we'll excuse the higher p-values of fumble rate and pass efficiency. Preseason average rushing efficiency, meanwhile, has essentially no correlation with regular season average rushing efficiency. If the preseason has little meaning on a league-wide level, then how does it fare on a team level?
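
These correlations are just Pearson correlations over ten yearly league-average pairs; a minimal sketch with scipy, using placeholder values rather than the real averages:

from scipy.stats import pearsonr

# One league-average value per year, 1997-2006. These numbers are
# placeholders, not the actual penalty-yard averages from the tables above.
pre_py = [61.9, 58.2, 55.4, 60.1, 63.0, 59.8, 57.5, 62.2, 60.7, 58.9]
reg_py = [54.3, 52.1, 50.8, 55.0, 56.2, 53.9, 52.5, 55.8, 54.1, 53.0]

r, p = pearsonr(pre_py, reg_py)   # corr. coef. and two-sided p-value
print(f"corr. coef. = {r:.5f}, p-value = {p:.5f}")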

Correlation of preseason stats with regular season stats, Unadj. VOLA

Stat | Corr. coef. | P-value
RO   | 0.18156     | 0.0012547
RD   | 0.15345     | 0.0065266
PO   | 0.28506     | 2.8993e-07
PD   | 0.25502     | 4.8902e-06
SRM  | 0.10118     | 0.073865
SRA  | 0.24271     | 1.4125e-05
3CM  | 0.20165     | 0.00033031
3CA  | 0.17538     | 0.0018427
PY   | 0.22266     | 7.0842e-05
IRG  | 0.10879     | 0.054507
IRT  | 0.118       | 0.036931
FRG  | 0.047389    | 0.40343
FRT  | 0.11199     | 0.047738


O=Offense, D=Defense, M=Made, A=Allowed, G=Given, T=Taken

Offensive performance seems to correlate better overall between preseason and regular season than defensive performance, with turnover rates being the only exception. Surprisingly, almost all of the p-values are below 5%, with fumble rate given (FRG) being the only glaring exception. In other words, it's highly unlikely that random inputs could create similar correlation coefficients, so it's safe to assume that overall team preseason performance means something, just not much. If your team does well in the preseason, that's great, but it's hardly a guarantee of success. If your team does poorly, it's really not all that much to sweat about. Of course, this meets our expectations because second-string and third-string players get playing time they won't get in the regular season. If someone wants to take the time to sort through the box scores and figure out the efficiency stats for first-stringers alone, be my guest; it's questionable how much the correlation coefficients would actually improve. Similar results can be seen with the correlation coefficients of preseason stats with regular season wins.

Correlation of preseason stats with regular season wins, Unadj. VOLA

Stat | Corr. coef. | P-value
RO   | 0.03494     | 0.53798
RD   | 0.049808    | 0.37983
PO   | 0.18824     | 0.00081724
PD   | 0.18979     | 0.00073831
SRM  | 0.087023    | 0.12445
SRA  | 0.14418     | 0.01065
3CM  | 0.14428     | 0.010596
3CA  | 0.21909     | 9.2999e-05
PY   | 0.054488    | 0.33663
IRG  | 0.11554     | 0.04107
IRT  | 0.029016    | 0.60908
FRG  | -0.023628   | 0.67711
FRT  | 0.093994    | 0.096927



Out of curiosity, I decided to create a linear regression model of regular season win totals using the following preseason stats: pass efficiency, sack rates, and third down conversion rates. With 1997-2006 stats, I tested on each year in 1998-2006, using all previous years as training data. On average, the predicted win totals have a correlation of 0.30593 with the actual win totals, not very high. The yearly average of mean absolute error was 3.1519 games, about twice what it is when using regular season stats. The average R² was 0.67495, which took me by surprise a little. 67.495% of the variance is accounted for by this data, compared to 79% for the regular season stats? I was expecting 40-50% tops.
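
The test harness amounts to the sketch below. The data loading is omitted, and the shapes of X and y are my assumptions here (one row per team, one column per preseason stat), not the code I actually ran:

import numpy as np
from sklearn.linear_model import LinearRegression

def walk_forward(X, y, years):
    """For each year, train on all prior years and score the predictions.
    X[year]: (teams x stats) array of preseason inputs; y[year]: win totals."""
    maes, corrs, r2s = [], [], []
    for i, test_year in enumerate(years):
        if i == 0:
            continue  # need at least one training year
        X_train = np.vstack([X[yr] for yr in years[:i]])
        y_train = np.concatenate([y[yr] for yr in years[:i]])
        model = LinearRegression().fit(X_train, y_train)
        pred = model.predict(X[test_year])
        maes.append(np.mean(np.abs(pred - y[test_year])))
        corrs.append(np.corrcoef(pred, y[test_year])[0, 1])
        r2s.append(model.score(X[test_year], y[test_year]))
    return np.mean(maes), np.mean(corrs), np.mean(r2s)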

What's more interesting about the model, however, is its ability to predict which teams will improve/decline the following season. In a manner similar to what I did here, I looked at which teams exceeded or fell short of their predicted win totals by more than the mean absolute error. Teams that outperformed their projected win total based on preseason stats are predicted to decline the next year, and teams that underperformed their projected win total are predicted to improve. Because there is some positive correlation between regular season and preseason stats, I expected some of the success using regular season stats to carry over. What I found, however, was that the model with preseason stats is slightly more accurate than the model with regular season stats. This might be a result of noise created by the extra inputs in the regular season stats model (e.g. kick and punt returns).
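
The flagging rule boils down to a few lines; a sketch with assumed dict inputs:

def flag_teams(projected, actual, mae):
    """Flag teams whose actual wins miss the projection by more than the
    model's mean absolute error. Inputs: dicts of team -> wins."""
    risers, fallers = [], []
    for team, proj in projected.items():
        diff = actual[team] - proj
        if diff < -mae:
            risers.append(team)    # underperformed projection: predicted to improve
        elif diff > mae:
            fallers.append(team)   # outperformed projection: predicted to decline
    return risers, fallers

def hit_rate(flagged, wins_now, wins_next, improving):
    """Fraction of flagged teams that moved in the predicted direction."""
    hits = sum(1 for t in flagged
               if (wins_next[t] > wins_now[t]) == improving)
    return hits / len(flagged) if flagged else float("nan")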

1998-2005 (predicting 1999-2006)
Accuracy predicting risers: 74%
Accuracy predicting fallers: 61.818%

2002-2005 (predicting 2003-2006)
Accuracy predicting risers: 67.857%
Accuracy predicting fallers: 67.742%

Preseason seems to have some useful meaning, then. On the other hand, we're talking about a roughly 6-game window outside of which a team has to fall to be labeled a riser or faller. If the projection is 8 wins (average), a team could be bad (5 wins) or very good (11 wins) and still be within the average error. The method flags about 6-7 risers and 6-7 fallers every year, so it's accurately predicting 8-9 teams to improve or decline each year. That's pretty good, I think. Without further ado, here are the projected risers and fallers for 2007:

Risers

  • Houston Texans (9.7623 expected wins vs. 6 actual wins)
  • Jacksonville Jaguars (12.371 vs. 8)
  • Oakland Raiders (10.005 vs. 2)
  • Dallas Cowboys (15.354 vs. 9)
  • New York Giants (11.979 vs. 8)
  • Tampa Bay Buccaneers (8.6278 vs. 4)

Based on the accuracy and what other projection models have shown, I'd pick Jacksonville, Oakland, Dallas, and Tampa Bay as the ones to actually improve.

Fallers

  • New York Jets (4.9253 expected wins vs. 10 actual wins)
  • Baltimore Ravens (8.2891 vs. 13)
  • Kansas City Chiefs (2.9868 vs. 9)
  • Chicago Bears (8.6448 vs. 13)
  • New Orleans Saints (5.9049 vs. 10)
  • San Francisco 49ers (4.1093 vs. 7)
  • Seattle Seahawks (2.0842 vs. 9)

Of these, I'd pick the Jets, Ravens, Chiefs, and Bears to decline. It really could go either way with the 49ers and Seahawks. That division is chaos.


After jumping through some hoops, I do seem to have found some relevance to the preseason. But it's nothing you couldn't find using regular season performance. As intuition would tell you, preseason performance is only slightly indicative of regular season performance.


2001 BAL@PHI box score was missing.


Friday, August 24, 2007

Probabilities of Playoff Droughts and Some Bold Playoff Predictions

Arizona, Buffalo, Detroit, Houston, and Miami are the only 5 teams in the league not to have a postseason appearance in the 5 postseasons since the 2002 realignment. Cleveland, San Francisco, and Oakland have not appeared in the last 4 postseasons. "Probability is like gravity. You cannot fight gravity." (A chocolate chip cookie to the person that gets that reference.) Eventually, the unlikely scenarios will happen, and fortunes will reverse. It's the cyclical nature of the league. So what are the odds of a team going on a 5-year postseason drought?


To simplify the calculations, I'm making two assumptions. First, each team has an equal probability of reaching the postseason. Second, the results of each year are independent of each other. Obviously there's a wide array of influences that affect the actual probabilities, but let's keep things simple. As it turns out, the simplest solution is pretty accurate. Since 12 of the 32 teams make the playoffs, a team misses the postseason in any given year with probability 20/32 = 0.625, so the probability of a team having a playoff drought lasting X seasons is simply 0.625^X. The following table lists the probabilities and the expected number of teams experiencing such a drought at any given point in time.
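
The table below can be reproduced with a few lines; here's a sketch that also handles the starred rows, where the league had 31 teams (1999-2001):

def drought_probability(miss_probs):
    """miss_probs: per-season P(missing the playoffs), e.g. 20/32."""
    p = 1.0
    for q in miss_probs:
        p *= q
    return p

for x in range(1, 9):
    # Count seasons back from 2006; 1999-2001 were 31-team seasons.
    seasons = [20/31 if yr in (1999, 2000, 2001) else 20/32
               for yr in range(2006, 2006 - x, -1)]
    p = drought_probability(seasons)
    n_teams = 31 if x >= 6 else 32   # Houston didn't exist before 2002
    print(f">= {x} years: {100 * p:.4f}%, expected teams: {n_teams * p:.4f}")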

Playoff drought of ≥X years | Probability | Expected number of teams | Actual number of teams | Year (latest possible) of last playoff berth
1  | 62.5%    | 20      | 20 | 2005
2  | 39.063%  | 12.5    | 13 | 2004
3  | 24.414%  | 7.8125  | 9  | 2003
4  | 15.259%  | 4.8828  | 8  | 2002
5  | 9.5367%  | 3.0518  | 5  | 2001
6* | 6.1527%  | 1.9073  | 3  | 2000
7* | 3.9695%  | 1.2305  | 3  | 1999
8* | 2.561%   | 0.79391 | 1  | 1998


* Probabilities adjusted to account for there being 31 teams in the league from 1999-2001: P(missing the playoffs in a given year, 1999-2001) = 20/31 = 64.516%. The actual number of teams does not include Houston.

For the most part, the number of actual teams with such playoff droughts matches up with the expected number. The only aberration is the number of teams with droughts of at least 4 years. That list of teams is: Arizona, Buffalo, Detroit, Miami, Cleveland, Houston, San Francisco, and Oakland. The probability of a drought reaching 6 seasons is 5.9605%, so only 2 (1.9073 to be exact) teams are expected to not have made the playoffs in the 2002-7 timeframe. In other words, expect two or three teams in the ARI, BUF, HOU, MIA, DET set to make the playoffs. Guess which two of those teams are expected to improve in 2007 based on actual vs. expected wins in 2006? Miami and Detroit.

That's right. I'm calling it right now: Miami and Detroit are making the playoffs. And will fail immediately thereafter. Trent Green will have a lot to do with Miami's success, and with a tough schedule, I don't expect them to go better than 9-7. Jon Kitna put up some very good yards per attempt numbers in 2006 but was hampered by really poor pass protection. The defense was very bad, too, but teams with good passing offenses tend to make the playoffs. Detroit could wind up 8-8 with an average passing offense made good to very good by better pass protection and the addition of Calvin Johnson. Plus, the NFC North is not a particularly strong division. At the same time, the article about rising and falling teams projects Detroit to finish last in the division (and Miami to finish first in the AFC East). Honestly, calling Miami and Detroit goes against every instinct I have as a football aficionado, but there it is. Both teams, after runs of bad luck, should get runs of good luck. Regression to the mean. No risk, no reward. If I'm wrong, I will post a picture of myself with JACKASS written across it.

So I'm predicting neither Arizona nor Buffalo will break their skids. Neither team has done anything to get appreciably better, but what are the odds of their playoff droughts continuing into 2008? Buffalo's last playoff game was the Music City Miracle in the 1999 season. That's 2 seasons of 31 teams and 6 seasons of 32 teams, so the chances of a team not making the playoffs from 2000-7 are (20/31)^2 * (20/32)^6 = 2.4809%. The expected number of teams with such a drought out of 31 teams is 0.76909. Based on my predictions for ARI and BUF, the actual number will be 2. The difference of 1.231 teams between expected and actual is not abnormal, looking at the table. Meanwhile, for the Cardinals, the probability of a team not making the playoffs from 1999-2007 is (20/31)^3 * (20/32)^6 = 1.6006%. Given 31 teams, the expected number of teams to experience such a drought would be 0.49619. So it's not ridiculous to think Arizona's not quite ready for primetime based on those odds.

Buffalo had a very good pass defense in 2006, which carries a high probability of a playoff berth (just not as high as with pass offense), but they lost their shut-down corner, Nate Clements, to San Francisco. Since the probable loss in pass defense won't be offset by the possible gain in rush offense with Marshawn Lynch, I don't see Buffalo improving, let alone enough to make the playoffs. Arizona will improve in Leinart's second year, but given how bad the other parts of the team are, I think it will be another year or two before Leinart will drag that team into the playoffs. Showing how much I think of them, I forgot to include Houston in my analysis in the first draft of this article. Houston actually came very close to being predicted as a faller to begin with (4.7810 expected wins, 6 actual, 1.2190 difference, with barrier being 1.233). They're just not very good in any area, and the division should be very competitive with Indy and the Jags.

In case you were wondering, the Bengals did a phenomenal job of fighting gravity from 1991-2004. The probability of a team experiencing a playoff drought in that time frame was 0.33711%.

To recap, the odds show that one or two of the Fantastic Four (ARI, BUF, DET, MIA) should make the playoffs this year. Given the last 6-8 years to color our perceptions, none of them seem likely to make the playoffs. But surprises happen every year, and streaks have endings. That doesn't mean you should expect great things from these teams. It just means things will swing their way for once. Both Detroit and Miami underperformed their expected wins based on my Value Over League Average stats (as well as Pythagorean Wins and DVOA Expected Wins), which also indicates they will be better in 2007. Improvements in passing offenses will lead to better records for Miami and Detroit, which might be just enough to push them into the playoffs.

8-27-07: Took out paragraph about Green Bay and Chicago because it wasn't particularly relevant, and when I went back to check the projection, I found I had mixed up the order.


Thursday, August 23, 2007

Accuracy of Vegas' Win Projections

On the post about the accuracy of PFP's 2006 win projections, I got a comment linking to another interesting analysis of their accuracy, as well as of Vegas' win projections for 2005 and 2006. There's an Excel file with those numbers at the link.

The projections all fall within the 4.5-11.5 win range, which is about as wide a spread as you'd want. What surprised me is that in both years, exactly half the teams performed over their projection and half performed under. Perhaps the lines are set so that an even split is close to automatic, but I would have expected some random perturbation either way, as with coin flips. 2007 might buck that trend, so we'll see.


2005:
Mean abs. err.: 3.0469 games
Corr. coef. with actual wins: 0.22331
Largest error: 5.5 games (Jets, Texans, Eagles)
Smallest error: 0.5 games (Patriots, Chiefs, Vikings, 49ers)
No. of predictions within:
0-1 games: 5
1-2 games: 4
2-3 games: 5
3-4 games: 8
4-5 games: 7


2006:
Mean abs. err.: 2.3438 games
Corr. coef. with actual wins: 0.36088
Largest error: 5.5 games (Ravens)
Smallest error: 0.5 games (Bills, Texans, Colts, Chiefs)
No. of predictions within:
0-1 games: 8
1-2 games: 9
2-3 games: 6
3-4 games: 5
4-5 games: 3

In 2006, both PFP's projections and Vegas' projections were within 2 games of being exactly right for 17 out of 32 teams. But the one thing that strikes me about PFP's numbers is that the number of predictions curve is monotonically (i.e. always) decreasing as the error becomes larger (10,7,5,4,3). In the two years shown here, the curve for Vegas is more irregular, though 2006 is better than 2005. In 2005, more teams' win projection errors were within 3-4 games than any other margin (4-5 games followed closely). The PFP projections correctly predicted the Over 8 times and the Under 8 times, so using PFP to make all your over/under season win total bets, you would have been exactly 50% correct. Overall, I'd say that PFP's projections are better, and I'm really surprised to see that Vegas had a mean absolute error of 3 games in 2005. But the gap between the two is not that wide.
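
For anyone who wants to run the same tallies on another set of lines, a sketch (the input lists are placeholders to be filled in from the Excel file):

import numpy as np

def projection_accuracy(lines, wins):
    """lines: projected win totals; wins: actual win totals."""
    err = np.abs(np.array(lines, float) - np.array(wins, float))
    print(f"Mean abs. err.: {err.mean():.4f} games")
    print(f"Corr. coef. with actual wins: {np.corrcoef(lines, wins)[0, 1]:.5f}")
    for lo in range(5):
        # bucket (lo, lo+1]; the first bucket also catches exact hits
        in_bucket = (err <= 1) if lo == 0 else ((err > lo) & (err <= lo + 1))
        print(f"{lo}-{lo + 1} games: {in_bucket.sum()}")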

Since I was curious, here's how the Vegas regular season win total lines project the final standings by division:

AFC EAST
New England
New York Jets
Miami
Buffalo

AFC NORTH
Baltimore/Cincinnati (tie)
Pittsburgh
Cleveland

AFC SOUTH
Indianapolis
Jacksonville
Tennessee
Houston

AFC WEST
San Diego
Denver
Kansas City
Oakland

NFC EAST
Philadelphia
Dallas
New York Giants
Washington

NFC NORTH
Chicago
Green Bay
Minnesota
Detroit

NFC SOUTH
Carolina/New Orleans (tie)
Atlanta
Tampa Bay

NFC WEST
Seattle
Arizona/St. Louis/San Francisco (tie)


Wednesday, August 22, 2007

Preseason Analysis Part I - Using Preseason Stats to Predict Regular Season Wins

As a Dolphins fan, I've found the preseason less than encouraging despite the 2 wins. The offensive line's run blocking sucks, to put it mildly, and the pass protection has been mediocre at best. Trent Green's accuracy is off, and the secondary hasn't been great either. But the sample sizes have been extremely small, which is the most important reason why I think preseason performance has little meaning. Many of the first-string QBs and RBs have fewer than 20 attempts. If one game is hard to predict because of the natural variance in performance, then one regular season game's worth of plays won't tell you a whole lot about a team. Then again, skill is skill and should show up to some extent in any game, regardless of its meaning. Depth is also important. So I've decided to examine the validity of my assumptions over the next few articles.

First up, can we use preseason efficiency stats to estimate regular season win totals?


For this experiment, I decided to use yards per rush and yards per pass stats only to keep things simple. Using unadjusted Value Over League Average to predict regular season win totals in 2006, a system with only Off. and Def. Pass and Rush Efficiency stats had a mean error of 1.541 games and an R² of 0.6224. A system using all unadjusted VOLA inputs had a mean error of 1.233 games and an R² of 0.77047 in comparison. So even with the reduction in detail, the retrodictive system is still pretty good.

For the preseason efficiency stats, I wanted to stick with the performance of first stringers as much as possible. They're the ones that are going to be playing all season (hopefully). For offensive efficiency stats, this was pretty straightforward. I just took the yards per pass from the QB stats page on NFL.com and the yards per rush from the RB stats page. Clearly this relies on my selection of who's first string. Sometimes the true starter was injured or holding out. If you want to know exactly whose stats I chose, feel free to e-mail me. On defense, I wanted to use first half stats only, but those would have been non-trivial to obtain. In the interest of just getting a rough draft of the idea out there, I just used overall yards per rush/pass. To get a VOLA, I just used the average of the efficiencies as the "league average". Again in the interests of time, I kept that calculation very rudimentary. To predict regular season win totals, I simply pretend that the preseason VOLA stats are the regular season VOLA stats and plug them into the retrodictive system. In other words, we're assuming that the VOLA stats at the end of preseason will be the same as at the end of the regular season, though it's not clear at all that a strong correlation exists. In an upcoming article, I will look at the correlation between preseason efficiency and regular season efficiency.
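
As a rough sketch of the rudimentary calculation described above (assuming unadjusted VOLA is measured as a fractional difference from the league average, which here is just the plain mean across teams):

def unadjusted_vola(efficiencies):
    """efficiencies: dict of team -> yards per play (rush or pass).
    Returns each team's fractional difference from the simple mean,
    which is the rudimentary 'league average' used here."""
    league_avg = sum(efficiencies.values()) / len(efficiencies)
    return {team: (eff - league_avg) / league_avg
            for team, eff in efficiencies.items()}

# Example with made-up numbers: a team at 6.5 yds/att in a ~5.6 league
# comes out around +16% over league average.
print(unadjusted_vola({"IND": 6.5, "JAX": 5.2, "HOU": 5.5, "TEN": 5.2}))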

Because the sample sizes for offensive stats were small, some teams ended up with extremely high or low VOLAs. This did not mesh well with the regression coefficients, which resulted in some teams being predicted to win fewer than zero games and some more than 16. Take the predicted win totals with a grain of salt. But let's look at how the system predicts the divisional standings.

2007 Predicted Final Standings Based on Preseason Stats through Week 3
AFC East


  1. New England, 6.9208 wins
  2. Buffalo, 4.3031
  3. New York Jets, 4
  4. Miami, -0.42247

New England tops the list thanks to roughly average pass offense and pass defense efficiency. It ranks Buffalo too high, though; most opinions would have them last.

AFC North

  1. Pittsburgh, 14.568 wins
  2. Cleveland, 9.0493
  3. Baltimore, 6.5127
  4. Cincinnati, 2.4849

Like Buffalo, Cleveland should probably be last, but the order is otherwise plausible.

AFC South

  1. Tennessee, 7.636 wins
  2. Houston, 7.2683
  3. Indianapolis, 6.3487
  4. Jacksonville, 6.161

Exactly the reverse order of what it should be. Indy's run defense is 26.255% above league average this preseason. Its pass offense is only 5.8% above average. Think that will last?

AFC West

  1. Oakland, 13.317 wins
  2. San Diego, 8.2215
  3. Kansas City, 5.0995
  4. Denver, 4.3164

Another case of a team ranked at the top that should probably be last. Oakland's run offense efficiency is 128.77% above league average; Lamont Jordan has averaged 8.4 ypc. It's interesting that this and the rankings based on actual vs. expected wins in 2006 put Kansas City ahead of Denver, despite the near certainty that Larry Johnson's ACLs will spontaneously combust before Week 4.

NFC East

  1. Philadelphia, 23.678 wins
  2. Dallas, 11.756
  3. Washington, 9.7259
  4. New York Giants, 7.0968

Based on FO's prediction of Washington being on the rise, these rankings seem totally plausible.

NFC North

  1. Detroit, 10.856 wins
  2. Minnesota, 10.071
  3. Chicago, 6.7431
  4. Green Bay, 3.9523

Jon Kitna was right! They ARE going to win 10 games! Or not. Another division in reverse order of what it should be.

NFC South

  1. New Orleans, 9.145 wins
  2. Carolina, 8.435
  3. Atlanta, 8.2839
  4. Tampa Bay, -0.64007

0.8 ypc from Cadillac Williams would do that to Tampa Bay. The rankings in this division seem plausible.

NFC West

  1. Seattle, 17.836 wins
  2. San Francisco, 13.602
  3. St. Louis, 6.5272
  4. Arizona, 4.7789

Both Seattle and San Francisco benefit greatly from very strong offensive pass efficiency stats, but the ranking is plausible.

I'll refine and recalculate the system at the end of preseason and post the revised predictions. I'll also post predictions using pass and run offense efficiency stats based on entire team performance, rather than on one player at each position. But in addition to sample size issues, the opponent quality is much more varied from team to team, as they do not travel far for road games in the preseason. Expect a lot of noise in the projections because of it. In terms of regular season wins and losses, I don't think this system will be particularly accurate, but in terms of predicting division standings, preseason performance might yield interesting information.


Monday, August 20, 2007

Apology for the Lack of Updates

I moved from South Florida to Washington state last week and started my new job today, so obviously I haven't had tons of time to update the site. I promise to have something by the end of the week, so keep checking in. Um... Excelsior?


Friday, August 10, 2007

An Idea for Estimating Future Performance

As discussed previously, football predictions (mine and others') are largely based on the averages of past performances. The past performances themselves fall above or below that average by various amounts, and future performance is almost certainly going to fall above or below the mean by some amount as well. But we don't know whether the performance will be above or below, or by how much. That's the limitation of using averages. If we have all of the observations from the season so far, can we do a better job of estimating performance? In other words, we want to estimate the following:

P(X_w = x_w | X_(w-1) = x_(w-1), X_(w-2) = x_(w-2), ..., X_1 = x_1),

where X_i is the performance observed in week i.


I'm pretty sure the expected value of performance would still be the mean, as it's the sum of all possible outcomes weighted by their probabilities, but there's still value in knowing the probability distribution of the possible outcomes. If Teams A and B averaged 4.0 yards a carry, you might think they're about as good. But if Team A has a 60% chance of averaging less than 4.0 yards a carry in a game, while Team B's chance is only 35%, then it's clear that Team B is more reliable. Team B would be the better choice to bet on. This would also be useful for fantasy football, I suspect. If there's a 5% chance a running back is going to go for 220 yards in today's game, you might start him despite his expected performance (the mean) maybe being 95 yards. It depends on your risk aversion, however.

My current idea as to the method of doing this would be a hidden Markov Model. Given a series of observations over time 1 through t-1, it estimates the probability of what you'll observe at time t. I'd break down performance into certain ranges (5.75-6 yards a pass attempt, 6-6.25, etc.) and use those as the evidence variables. I'm thinking the unobserved state variables would be the average performance broken down into ranges in a similar way.

Another page on hidden Markov models.
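
To make the binning idea concrete, here's a bare-bones sketch of the filtering step (not the learning). The bin widths, transition matrix, and emission shape are all made-up placeholders; estimating them from real play-by-play data (e.g. with Baum-Welch) would be the actual work:

import numpy as np

state_levels = [5.0, 5.5, 6.0, 6.5]        # hidden "true average YPA" bins
obs_edges = np.arange(4.0, 8.25, 0.25)     # observed per-game YPA bins, 0.25 wide

# P(state_t | state_t-1): rows are the previous state; sticky by design
T = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.05, 0.85, 0.08, 0.02],
              [0.02, 0.08, 0.85, 0.05],
              [0.00, 0.02, 0.08, 0.90]])

def emission_row(level):
    """P(observed bin | state): a discretized bell curve around the level."""
    centers = (obs_edges[:-1] + obs_edges[1:]) / 2
    w = np.exp(-0.5 * ((centers - level) / 0.75) ** 2)
    return w / w.sum()

E = np.vstack([emission_row(s) for s in state_levels])   # states x obs bins

def next_game_distribution(observed_ypa):
    """P(next game's YPA bin | games observed so far), via the forward pass."""
    belief = np.full(len(state_levels), 1.0 / len(state_levels))
    for ypa in observed_ypa:
        obs_bin = int(np.clip(np.digitize(ypa, obs_edges) - 1, 0, E.shape[1] - 1))
        belief *= E[:, obs_bin]                # condition on this game's evidence
        belief = (belief / belief.sum()) @ T   # advance the hidden state
    return belief @ E                          # mix emissions over current belief

print(next_game_distribution([5.8, 6.4, 5.1, 6.9]).round(3))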

Before going to all the trouble of implementing it, I was curious to hear people's reactions and thoughts and suggestions. Plus, I'm moving next week, so I'll take the time to develop the idea while I get settled in.


Tuesday, August 7, 2007

Do Stats Need to be Adjusted for Conference?

In this post and this post, I found that the proportions of games won by the home team and by the "better" team (according to DVOA) are smaller for interconference games than for intraconference games, with a larger year-to-year variance in the home-team proportion. Games between divisional opponents have the highest of both proportions and the smallest variance. This led me to hypothesize that coaches are able to adapt better to teams they see more frequently, which makes intuitive sense, but I was also curious whether stats such as DVOA (or my VOLA stats) need to be adjusted based on conference averages rather than league averages. Are NFC teams rewarded unfairly by DVOA for playing three quarters of their season in a conference full of weak teams? Ideally, opponent adjustments will filter out those effects, even if they're computed against the league average. But perhaps 4 games against a stronger conference is not enough to filter out the effect. NFC Team X has played Y% better than league average when adjusted for opponent, but they were more likely to reach that level because of an easy schedule; against AFC opponents, they'd be less likely to play at that level, even after adjusting for opponent strength. This effect might pop up more in games between average to below-average opponents than when the NFC team is legitimately good. If the effect is real, then we'd see a greater proportion of interconference games being lost by the team with the higher DVOA when that team is in the NFC rather than the AFC.


I went back and looked at games from the entire history of DVOA (1996-2006). The DVOA totals used reflect the entire season's performance, including postseason. We should see the balance of power shift in favor of the AFC in 2000-6, given that only 1 NFC team ('02 Bucs) won the Super Bowl in that time span, and that team had a large and rare tactical advantage.

Interconference games: how often each conference had the better team (by DVOA), had the better team at home, and how often its better team won (AFC / NFC)

Year | % Was Better Team | % Was Better Team and At Home | Better Team Win %, Overall
1996 | 0.56667 / 0.43333 | 0.28333 / 0.21667             | 0.67647 / 0.65385
1997 | 0.56667 / 0.43333 | 0.25 / 0.18333                | 0.55882 / 0.53846
1998 | 0.55 / 0.45       | 0.28333 / 0.23333             | 0.69697 / 0.7037
1999 | 0.65 / 0.35       | 0.31667 / 0.16667             | 0.74359 / 0.57143
2000 | 0.65 / 0.35       | 0.31667 / 0.16667             | 0.64103 / 0.7619
2001 | 0.5 / 0.5         | 0.25 / 0.25                   | 0.8 / 0.8
2002 | 0.59375 / 0.40625 | 0.29688 / 0.20313             | 0.63158 / 0.57692
2003 | 0.60938 / 0.375   | 0.28125 / 0.15625             | 0.66667 / 0.66667
2004 | 0.75 / 0.25       | 0.40625 / 0.15625             | 0.77083 / 0.5625
2005 | 0.65625 / 0.34375 | 0.32813 / 0.17188             | 0.61905 / 0.63636
2006 | 0.59375 / 0.40625 | 0.29688 / 0.20313             | 0.71053 / 0.5



The first table shows that the AFC might have been the better conference overall since at least 1996. For both conferences, the number of games in which their team was the better team is split about evenly between home and away in almost every year, so neither conference's numbers should be unduly affected by home field advantage. In 6 out of 11 years, the AFC had a better win percentage when it had the better team than the NFC did when it had the better team; in 2 of those seasons, the difference amounted to less than one game ('96 and '97). Looking at 2002-6, after the realignment, the AFC has had the better win percentage in 3 out of the 5 seasons. In 2003 and 2005, the conferences were essentially even. In 2004, the NFC would have had to win 3.33 more of the games in which it had the better team to match the AFC's win percentage; in 2006, it would have needed 5.4737 more wins. The predictive power of DVOA (where prediction = team with better total DVOA wins) was only 55.56% in 2006, compared to 66.67% in 2005 and 65% in 2004. Based on the table above, I'm guessing a large amount of the dropoff came from interconference games, with DVOA valuing NFC teams too highly given the gap between the conferences. That the year-to-year differences in better-team winning percentage favor the AFC on average also points to NFC teams being rewarded by a weak conference.


Better team win % in interconference games, by whether the better team was at home or away (AFC / NFC)

Year | Better Team Win %, Home Better | Better Team Win %, Away Better
1996 | 0.76471 / 0.76923              | 0.58824 / 0.53846
1997 | 0.6 / 0.81818                  | 0.52632 / 0.33333
1998 | 0.82353 / 0.78571              | 0.5625 / 0.61538
1999 | 0.78947 / 0.7                  | 0.7 / 0.45455
2000 | 0.84211 / 0.8                  | 0.45 / 0.72727
2001 | 0.8 / 0.86667                  | 0.8 / 0.73333
2002 | 0.78947 / 0.76923              | 0.47368 / 0.38462
2003 | 0.83333 / 0.9                  | 0.52381 / 0.5
2004 | 0.80769 / 0.6                  | 0.72727 / 0.5
2005 | 0.7619 / 0.81818               | 0.47619 / 0.45455
2006 | 0.73684 / 0.46154              | 0.68421 / 0.53846



The NFC teams see a bigger gap in better team win % for home and away games. The average difference is 18.517% for the AFC and 22.807% for the NFC. Small sample size has some effect for the games in which the NFC team is better, however. On average, the difference seems to be one or two games that could go either way.

So DVOA does seem to unfairly reward some NFC teams for playing in a weaker conference. Which teams are throwing off the system, though: the good teams being bumped up to very good or the mediocre teams being bumped up to good? In most years, the average gap in DVOA between teams in games in which the AFC was the better team and lost was larger than that for the NFC. And in most years, the average DVOA of the better teams that lost was higher for the AFC than the NFC. I would say, then, that the NFC teams that are mediocre to slightly above average (Rams, Falcons, Panthers) are unfairly rewarded more than good NFC teams (Eagles, Cowboys) for playing in a weaker conference.

Though DVOA is far more sophisticated than my methods, I believe I can apply the lessons learned here to my prediction model, specifically my VOLA stats. For intraconference games, I might start using Value Over Conference Average rather than VOLA. For interconference games, I could use Value Over Other Conference Average. So if Team A averages 6.5 yards per play, that could be average in the AFC, but it might be above average in the weaker NFC. Perhaps you could do a similar thing for DVOA, but the math of it is rather hazy to me right now.
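
The change would be small in code; a sketch with a made-up team/conference structure:

def value_over_conf_avg(teams, team, conference):
    """teams: dict of team -> (conference, yards per play).
    Measures `team` against the average of `conference`, which can be
    its own conference (intraconference) or the other one."""
    in_conf = [ypp for conf, ypp in teams.values() if conf == conference]
    conf_avg = sum(in_conf) / len(in_conf)
    return (teams[team][1] - conf_avg) / conf_avg

# Made-up numbers: 6.5 yds/play is roughly average in this AFC but
# clearly above average against the weaker NFC's baseline.
teams = {"A": ("AFC", 6.5), "B": ("AFC", 6.6), "C": ("NFC", 5.9), "D": ("NFC", 6.0)}
print(value_over_conf_avg(teams, "A", "AFC"))   # ~ -0.8%: about conference average
print(value_over_conf_avg(teams, "A", "NFC"))   # ~ +9.2%: above the other conference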


Monday, August 6, 2007

How Often Does the Better Team Win?

In the National Football League, no win can be guaranteed. One play can have a large impact on any game. It's part luck, but it's part happenstance too. It takes skill to get an interception, but it's the happenstance of what play is being run, and thus where the players are, that largely dictates whether the interception is returned for a touchdown or not. If the interception occurred in the red zone and was returned for a touchdown, that's at least a 10 point swing. So a play with a 2.96% probability of occurring (the league average interception rate) has an inordinately large impact on the game. Brian Burke's blog had a very good piece on how much luck is involved with winning; it concluded that about half of winning games is luck (52.5% to be exact), so the better team is going to win around 74% of the time. I was curious to see how that works out in reality and to further validate my assertion that interconference games have more inherent variance and less predictability than intraconference games. To decide the better team, I simply used total DVOA from Football Outsiders (1996-2006). Please note that the DVOA stats cover the entire regular season and postseason, so they are retrodictive, not predictive. The predictive ability of total DVOA is not as good; if DVOA had a predictive accuracy of 70%, I wouldn't be working as hard on a prediction system.



Average result means how many more points the better team scores on average. Average margin of victory means how many more points the winning team scores on average. The averages are computed by year, so the total proportion of games won by the better team, etc. will vary slightly from the numbers listed here. Part of the original study on interconference games was to see how much year-to-year variance there was in the outcomes of those games.

Interconference       | Better Team Win % | Avg. Result | Avg. Margin of Victory
Mean, 1996-2001       | 0.68056           | 6.9222      | 12.228
Std. Dev., 1996-2001  | 0.079873          | 2.5031      | 0.99776
Mean, 2002-6          | 0.65              | 6.9313      | 12.213
Std. Dev., 2002-6     | 0.045015          | 1.1537      | 0.89953

Interdivision         | Better Team Win % | Avg. Result | Avg. Margin of Victory
Mean, 1996-2001       | 0.66446           | 5.694       | 11.042
Std. Dev., 1996-2001  | 0.047733          | 1.579       | 0.75167
Mean, 2002-6          | 0.68958           | 6.5396      | 11.169
Std. Dev., 2002-6     | 0.048524          | 1.4997      | 0.92161

Intradivision         | Better Team Win % | Avg. Result | Avg. Margin of Victory
Mean, 1996-2001       | 0.6906            | 6.4449      | 11.232
Std. Dev., 1996-2001  | 0.047567          | 1.1109      | 0.70574
Mean, 2002-6          | 0.73333           | 7.0729      | 11.356
Std. Dev., 2002-6     | 0.050281          | 1.089       | 0.53004

All Games             | Better Team Win % | Avg. Result | Avg. Margin of Victory
Mean, 1996-2001       | 0.68096           | 6.3569      | 11.42
Std. Dev., 1996-2001  | 0.02639           | 1.0343      | 0.24626
Mean, 2002-6          | 0.69609           | 6.8375      | 11.5
Std. Dev., 2002-6     | 0.027538          | 0.64852     | 0.31041



So with the 20/20 hindsight of each entire season, the better team wins about 69% of games, close to the 74% reported in Brian's blog, which was based on 2002-6. On 2002-6 intradivision games, he was pretty much dead on: 73.333% vs. 74%. Most of the discrepancy can be traced back to interconference games. The divisional realignment in 2002 reduced the year-to-year variance in the percentage of interconference games won by the better team, but it also reduced the average percentage from 68% to 65%. It's interesting that the average margin of victory is larger in interconference games than in the other types, but I'm not sure what that means.

I have two ideas on possible reasons why fewer interconference games are won by the better team. First, maybe coaches have more problems adapting strategy to opponents they don't see as often. An interconference matchup occurs only once every four years now (before, some matchups were much more common than others). Coaches have only a week to prepare for games, so they can only learn so much about a team's strengths and weaknesses. Obviously, the more time they have to study opponents, the more they will learn about them. So every time the interconference matchup comes up, the coach probably has to throw out a good deal of what he learned the last time. With intradivision matchups, you see the opponent twice a year and can re-use knowledge gained from previous matchups. Second, maybe stats should be adjusted for conference quality in addition to specific opponent quality like in baseball. I'm not sure this would work, given that the rules in both conferences are the same, unlike in baseball. Given that 75% of the season is intraconference, though, perhaps it's slightly inaccurate to judge a team based on the whole league, rather than their specific conference, when trying to predict an intraconference game. I've toyed with implementing this idea and might pursue it sometime in the near future.


Sunday, August 5, 2007

Another Look at the Importance of Offense and Defense in the Playoffs

In this post, I examined why defensive performance has a higher correlation with playoff success and determined that the root was the greater abundance of very good offenses among playoff teams. Thus, teams needed better defenses to make it through the playoffs. It's not a matter of one unit being more important than the other. It's a matter of balance. In this article, I redid the previous experiment using Football Outsiders' DVOA stats instead of my own VOLA stats. DVOA covers the same time period as the original experiment: 1996-2006. It's very important to point out that my VOLA statistics covered only the regular season, but the DVOA stats used here are based on regular season and postseason (where applicable) performance. This probably biases certain stats used here, such as the average DVOAs of Super Bowl winners, simply because the DVOAs used take into account that the teams performed well against good teams in the postseason, which isn't necessarily indicative of their regular season performance (see the 2006 Colts). The results, however, support what I said the last time.


For the following tables, O=Offense, D=Defense, R=Run, P=Pass, ST=Special Teams.

The correlation coefficients with the seeds are actually the correlation coefficients with 7 - (seed #), where the seed # is 7 for non-playoff teams. So the second column includes all teams (where the seed input runs from 0 to 6), while the third column includes only playoff teams (where the seed input runs from 1 to 6).
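
In code, the transform is simple; a sketch assuming parallel lists of DVOA values and seeds:

import numpy as np

def seed_correlation(dvoa, seeds, playoff_only=False):
    """Correlate a DVOA component with 7 - seed, where non-playoff
    teams carry seed 7 (so better seeds map to larger inputs)."""
    dvoa, seeds = np.asarray(dvoa, float), np.asarray(seeds)
    if playoff_only:
        keep = seeds <= 6                  # drop the seed-7 (non-playoff) teams
        dvoa, seeds = dvoa[keep], seeds[keep]
    return np.corrcoef(dvoa, 7 - seeds)[0, 1]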

Stat | Corr. to Wins | Corr. to Seed (all teams) | Corr. to Seed (playoff teams only)
O    | 0.6478        | 0.49578                   | 0.19407
RO   | 0.4697        | 0.35797                   | 0.23226
PO   | 0.6241        | 0.47916                   | 0.11531
D    | -0.5252       | -0.3899                   | -0.27018
RD   | -0.3517       | -0.25361                  | -0.202
PD   | -0.5066       | -0.37634                  | -0.25634
ST   | 0.2603        | 0.23787                   | 0.18571



For this table, defensive DVOAs less than or equal to -X% are tallied in the ≥X% columns.

ALL TEAMS | ≥10 (Teams) | ≥10 (%) | % Made Playoffs | ≥20 (Teams) | ≥20 (%) | % Made Playoffs
O         | 68          | 19.825  | 79.412          | 21          | 6.1224  | 90.476
RO        | 45          | 13.12   | 66.667          | 11          | 3.207   | 63.636
PO        | 101         | 29.446  | 73.267          | 49          | 14.286  | 85.714
D         | 77          | 22.449  | 68.831          | 17          | 4.9563  | 82.353
RD        | 97          | 28.28   | 53.608          | 21          | 6.1224  | 47.619
PD        | 95          | 27.697  | 61.053          | 34          | 9.9125  | 70.588
ST        | 1           | 0.29155 | 0               | 0           | 0       | 0

PLAYOFF TEAMS | Median | Mean
O             | 6.65   | 7.475
RO            | 1.55   | 1.1735
PO            | 11.85  | 13.123
D             | -7.15  | -6.903
RD            | -7.9   | -7.0205
PD            | -8.55  | -6.8765
ST            | 0.95   | 0.97121

SB WINNERS | Median | Mean
O          | 11.5   | 12.582
RO         | 6      | 3.9273
PO         | 17.8   | 20.936
D          | -15.1  | -14.155
RD         | -12    | -10.955
PD         | -14.9  | -17.373
ST         | 2.2    | 2.3455



Teams with very good offenses are more common than teams with very good defenses, and teams with very good offenses are more likely to reach the postseason than teams with very good defenses. Having either still gives you a pretty good chance of making the playoffs. Having both makes it nearly certain. 28 out of 30 teams (93.33%) that had at least a 6% Off. DVOA and at least a -6% Def. DVOA made the playoffs. The only 2 teams not to were the 2002 Miami Dolphins, who blew a big lead late in the final game of the season against the Pats, and the 1999 Oakland Raiders, who were #3 in total DVOA but posted an 8-8 record.
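
The "having both" tally is easy to reproduce from a table of team-season DVOAs; a sketch (the row format is my assumption):

def balanced_team_rate(rows, off_cut=6.0, def_cut=-6.0):
    """rows: (off_dvoa, def_dvoa, made_playoffs) per team-season.
    Counts teams at or above the offensive cutoff AND at or below the
    defensive cutoff (negative DVOA is good for a defense), and how
    often those balanced teams made the playoffs."""
    qualified = [r for r in rows if r[0] >= off_cut and r[1] <= def_cut]
    made = sum(1 for r in qualified if r[2])
    rate = made / len(qualified) if qualified else 0.0
    return len(qualified), rate

# e.g. balanced_team_rate(rows) -> (30, 0.9333) per the numbers above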

Defense, however, has the higher correlation with playoff seeding. When nearly 70% of playoff games are won by the home team, seeding becomes very important. Then again, a higher seeding usually indicates a better team. Looking at the DVOAs, the home team is the better team in about 67% of postseason games, and the home team wins 69% of postseason games. The home teams win 70% of games in which they are better and 60.6% of games in which they are worse (though this number has dropped to 40% since the realignment in 2002).

On average, the Super Bowl winner is a balanced team. The offensive DVOA is above 10%, and the defensive DVOA is below -10%; in fact, the average defensive DVOA comes out slightly stronger in magnitude. Special teams DVOA is positive as well. The worst offensive DVOA of a Super Bowl winner belongs to the 2000 Ravens (-7%), and the worst defensive DVOA belongs to the 2006 Colts (11%). The Ravens, however, at least had a good running game (8.3%). In 11 Super Bowls, the team with the better total DVOA won 9 times. The two exceptions were the 2001 Patriots and the 2006 Colts. The 2006 Colts, however, had the better weighted DVOA, in which the latest games are weighted more strongly. Seven champions had the better offensive DVOA, and seven had the better defensive DVOA. Six champions had the better special teams DVOA. Four had both the better offensive and defensive DVOA. Only 2 champions were better in all three categories. All of the champions were better in at least one category (the 2001 Patriots had better ST DVOA than the Rams).

People often point out the Ravens and the Bucs to say that defense wins championships. While Baltimore won in 2000 with a -7% offensive DVOA, they had a better total DVOA than the Giants, who had a good defense (-8.2%) but a weak offense (4.4%) compared to the Ravens' monstrous -30% defensive DVOA. The Ravens also had better special teams (7.2% vs. -4.4%). Similarly, the Bucs had better special teams and a defense that was historically good. The Raiders' offense, which was damned good (24.4%), couldn't hold up, but it's hard to quantify the impact of Barrett Robbins and Jon Gruden's vast insider knowledge. The first paragraph of the game summary section of Super Bowl XXXVII's Wikipedia entry indicates that the Raiders' offense was at a huge disadvantage because of Gruden's knowledge. So you can win with a mediocre offense (4 out of 11 winners had <1% DVOA), but it depends on the matchup (the Raiders, plus the 2003 Patriots' 0.4% Off. DVOA against the Panthers' -7.2%), and you probably need a ridiculously good defense. Only two teams, however, have won with a mediocre defense (>0% DVOA): the 1998 Broncos, who had a really good offense led by John Elway and Terrell Davis (28% DVOA), and the 2006 Colts, who had a really good offense. The Broncos also beat a well-balanced Falcons team (10.1% Off, -13.4% Def, 2.8% ST).

Teams like the '00 Ravens and '02 Bucs have struggled since their Super Bowl wins because defensive performance is less consistent on a yearly basis, and they did not have the capability on offense to make up for the performance loss. The Colts will struggle to duplicate their success in '06 as well without at least an average defense.


Thursday, August 2, 2007

Accuracy of Pro Football Prospectus' 2006 Win Projections

I've been curious about exactly how accurate the mean win projections in Pro Football Prospectus are, so I threw together some stats.

PFP's 2006 Mean Win Projections


  • The mean absolute error was 2.2563 games.
  • 10 projections had an error of 1 game or less.
  • 7 projections had an error of more than 1 game and at most 2 games.
  • 5 projections had an error of more than 2 games and at most 3 games.
  • 4 projections had an error of more than 3 games and at most 4 games.
  • 3 projections had an error of more than 4 games and at most 5 games.
  • The biggest errors were for New Orleans (4.1 vs. 10) and Oakland (8.0 vs. 2).
  • The smallest errors were for Houston (6.1 vs. 6), Minnesota (5.9 vs. 6), Philadelphia (9.8 vs. 10), and the Giants (7.8 vs. 8).
  • The correlation coefficient of the mean win projections with actual win totals was 0.3653.
