Friday, May 27, 2011

The Offense-Defense Model & Probabilistic Matrix Model (PMM)

The first two MOV-based models we'll look at are similar:  they both calculate an "Offense" and a "Defense" rating for each team.  The "Offense" number represents a team's offensive capability and the "Defense" its defensive capability.  When Duke plays UNC, Duke's predicted score is Duke's "Offense" times UNC's "Defense."  Generally speaking, these ratings are calculated by initializing all teams to some baseline values (e.g., so that the expected score O x D = 65) and then iteratively adjusting the values so that they more closely match actual game outcomes.  If Arizona State University consistently scores few points, its "Offense" value will drop (and at the same time its opponents' "Defense" values will also drop).  After some number of iterations adjusting the values, the total error (across all games) is minimized.
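As a purely illustrative example (these numbers are made up, not actual ratings): with every rating initialized to the square root of 65, any matchup initially predicts 65 points, and fitted ratings then move that prediction up or down.

import math

baseline = math.sqrt(65)               # ~8.06, so baseline Offense x Defense = 65
duke_offense, unc_defense = 9.2, 7.4   # hypothetical fitted ratings
predicted_duke_score = duke_offense * unc_defense   # ~68.1 predicted points for Duke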

The Offense-Defense model is a version of this approach that I first implemented for the 2010 March Madness Predictive Analytics Challenge.  It's a fairly simple model.  For each team, for each game, it predicts a score based upon the current Offense and Defense ratings.  It then determines the error between the prediction and the actual game result, and adjusts the appropriate Offense and Defense ratings to remove 75% of that error.  This process is repeated across all the games for a fixed number of iterations.  (The algorithm isn't guaranteed to converge, although in practice it usually does.)
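Here is a minimal sketch of that fitting loop.  The exact way the 75% correction is split between the two ratings isn't spelled out above, so treat the equal multiplicative adjustment below as one plausible choice rather than the definitive update rule:

import math

def fit_offense_defense(games, teams, iterations=20, error_fraction=0.75):
    """Iteratively fit Offense/Defense ratings.

    games: list of (team_a, team_b, score_a, score_b) tuples.
    teams: iterable of team names.
    """
    baseline = math.sqrt(65.0)                 # so Offense x Defense starts at 65
    offense = {t: baseline for t in teams}
    defense = {t: baseline for t in teams}

    for _ in range(iterations):
        for team_a, team_b, score_a, score_b in games:
            for scorer, opponent, actual in ((team_a, team_b, score_a),
                                             (team_b, team_a, score_b)):
                predicted = offense[scorer] * defense[opponent]
                target = predicted + error_fraction * (actual - predicted)
                # Scale both ratings equally so their product hits the target,
                # removing 75% of this game's error in one step.
                factor = math.sqrt(target / predicted)
                offense[scorer] *= factor
                defense[opponent] *= factor
    return offense, defense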

Testing this algorithm with our usual methodology gives these results (for comparison, I show the best non-MOV predictor as well):

  Predictor            % Correct    MOV Error
  TrueSkill + iRPI     72.9%        11.01
  Offense-Defense      69.6%        11.84

This performance (with some adjustments) was enough to win the 2010 Challenge, but it is disappointing compared to the TrueSkill + iRPI performance.  In particular, we might expect ratings based upon MOV to produce lower MOV errors, but that is not the case here.  I also implemented a version of the Dick Vitale methodology, where I calculated separate home and away ratings for all teams.  In this case, the home team's predicted score is the home team's home Offense times the away team's away Defense (and vice versa for the away team); a sketch of this variant appears below the results.  Here's how it performs:

  Predictor                        % Correct    MOV Error
  TrueSkill + iRPI                 72.9%        11.01
  Offense-Defense (home & away)    68.2%        12.26

Surprisingly (at least to me), this is significantly worse than the undifferentiated ratings.  Perhaps this is additional evidence that teams don't play differently at home than on the road; the home court advantage would then be due primarily to the referees -- a conclusion shared by Sports Illustrated.
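For concreteness, here's how the prediction step changes in the home/away variant.  The table names and helper function are hypothetical; fitting proceeds just as before, except that each game only updates the home team's home ratings and the away team's away ratings.

def predict_home_away(home, away, home_offense, home_defense,
                      away_offense, away_defense):
    """Predict (home score, away score) from home/away-specific ratings.

    home_offense[t] / home_defense[t] are team t's ratings in home games;
    away_offense[t] / away_defense[t] are its ratings in away games.
    """
    home_score = home_offense[home] * away_defense[away]
    away_score = away_offense[away] * home_defense[home]
    return home_score, away_score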

The second model I tested is the "Probabilistic Matrix Model" (PMM).  This model is based upon the code Danny Tarlow released for his tournament predictor, which he discusses here.  This is similar in spirit to the Offense-Defense model, if much more sophisticated mathematically.  (You can tell this because the code has variables like s_hat_i in it.)  Testing PMM gives these results:

  Predictor            % Correct    MOV Error
  TrueSkill + iRPI     72.9%        11.01
  Offense-Defense      69.6%        11.84
  PMM                  71.7%        11.23

The PMM does better than my naive Offense-Defense model (apparently there's something to all that math stuff), but it still does not approach the performance of TrueSkill + iRPI.  I did not implement separate home & away ratings for the PMM, but there's no reason to think they would improve performance.
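For the curious, here's a rough sketch of the general flavor of this kind of model.  This is not Danny Tarlow's released code -- just an illustration of the matrix-factorization idea behind it: each team gets small latent offense and defense vectors, a team's expected score is the dot product of its offense vector with the opponent's defense vector, and the vectors are fit by gradient descent on squared prediction error.

import math
import random

def fit_latent_ratings(games, teams, dims=2, lr=1e-4, epochs=200):
    """Fit latent offense/defense vectors by stochastic gradient descent.

    games: list of (team_a, team_b, score_a, score_b) tuples.
    teams: iterable of team names.
    """
    random.seed(0)
    scale = math.sqrt(65.0 / dims)     # so an average matchup starts near 65 points
    off = {t: [scale * random.uniform(0.9, 1.1) for _ in range(dims)] for t in teams}
    def_ = {t: [scale * random.uniform(0.9, 1.1) for _ in range(dims)] for t in teams}

    for _ in range(epochs):
        for team_a, team_b, score_a, score_b in games:
            for scorer, opp, actual in ((team_a, team_b, score_a),
                                        (team_b, team_a, score_b)):
                pred = sum(o * d for o, d in zip(off[scorer], def_[opp]))
                err = actual - pred
                for k in range(dims):
                    o_k, d_k = off[scorer][k], def_[opp][k]
                    # Gradient step on squared error for each latent dimension.
                    off[scorer][k] += lr * err * d_k
                    def_[opp][k] += lr * err * o_k
    return off, def_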
