Tuesday, November 13, 2012

More on Early Season Performance

Prior to my recent detour, I was looking at predicting early season performance.  To recap, experiments showed that predicting early season games using the previous season's data works fairly well for the first 800 or so games of the season.  However, "fairly well" in this case means an MOV error of around 12, which is better than predicting with no data, but not close to the error of around 11 we get with our best model for the rest of the season.  The issue I want to look at now is whether we can improve that performance.

A reasonable hypothesis is that teams might "regress to the mean" from season to season.  That is, the good teams probably won't be as good the next season, and the bad teams probably won't be as bad.  This will be wrong for some teams -- there will be above-average teams that get even better, and below-average teams that get even worse -- but overall it might be a reasonable approach.

It isn't immediately clear, though, how to regress the prediction data for teams back to the mean.  For something like the RPI, we could calculate the average RPI for the previous season and push team RPIs back towards that number.  But for more complicated measures that may not be easy.  And even for the RPI, it isn't clear that this simplistic approach would be correct.  Because RPI depends upon the strength of your opponents, it might be that a team with an above-average RPI that played a lot of below-average RPI teams would actually increase its RPI, because we would be pushing the RPIs of its opponents up towards the mean.
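To make that concrete, here's a small sketch (not code from my predictor) using the standard RPI weighting of 0.25/0.50/0.25 and made-up winning percentages for a hypothetical team that played a weak schedule:

    # Standard RPI: 25% own winning %, 50% opponents' winning %,
    # 25% opponents' opponents' winning %.
    def rpi(wp, owp, oowp):
        return 0.25 * wp + 0.50 * owp + 0.25 * oowp

    # Pull a winning percentage part of the way back toward the mean.
    def toward_mean(x, mean=0.5, factor=0.25):
        return x + factor * (mean - x)

    # A good team that beat up on weak opponents (made-up numbers).
    wp, owp, oowp = 0.80, 0.42, 0.46
    print(rpi(wp, owp, oowp))                             # 0.525
    # Regress only the opponents toward the mean: this team's RPI goes
    # *up*, because 75% of the RPI weight sits on the opponent terms.
    print(rpi(wp, toward_mean(owp), toward_mean(oowp)))   # 0.5375
    # Regress the team's own record too, and the two effects largely
    # cancel -- the RPI barely moves even though the team's record
    # got noticeably worse.
    print(rpi(toward_mean(wp), toward_mean(owp), toward_mean(oowp)))  # ~0.519

The opponent terms and the team's own term pull in opposite directions, which is why it isn't obvious what the "right" regressed RPI even is.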

A more promising (perhaps) approach is to regress the underlying game data rather than trying to regress derived values like the RPI.  So we can use the previous season's data, but in each game we'll first reduce the score of the winning team and raise the score of the losing team.  This will reduce the value of wins and reduce the cost of losses, which should have the effect of pulling all teams back to the mean.
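In code, the tweak is something along these lines (a sketch only; it assumes each game is stored as a simple winner/loser score pair, which isn't exactly how my data is organized):

    # Shrink a game's margin by lowering the winner's score and raising
    # the loser's score by a small percentage.
    def shrink_margin(winner_score, loser_score, pct=0.01):
        return winner_score * (1 - pct), loser_score * (1 + pct)

    # Example: a 78-70 game becomes roughly 77.2-70.7, so its margin
    # of victory shrinks from 8 points to about 6.5.
    print(shrink_margin(78, 70))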

The table below shows the performance when scores were modified by 1%:

  Predictor                                % Correct    MOV Error
  Early Season w/ History                    75.5%        12.18
  Early Season w/ Modified History (1%)      71.7%        13.49

Clearly not an improvement, and also a much bigger effect than I had expected.  After all, 1% changes most scores by less than 1 point.  (Yes, my predictor is perfectly happy with an 81.7 to 42.3 game score :-)  So why does the predicted score change by enough to add 1+ points of error?

Looking at the model produced by the linear regression, this outsized response seems to be caused by a few inputs with large coefficients.  For example, the home team's average MOV has a coefficient of about 3000 in the model, so even a small tweak to the scores -- which feeds directly into MOV -- can have an outsized impact on the model's output.
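To put an illustrative number on it (the exact scaling of the inputs in my model is beside the point): if the 1% tweak shifts the average-MOV input by even half a thousandth of a unit, a coefficient of 3000 turns that into 3000 x 0.0005 = 1.5 points of change in the predicted margin -- roughly the size of the extra error in the table above.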

With that understood, we can try dialing the tweak back by an order of magnitude and modifying scores by only 0.1%:

  Predictor                                % Correct    MOV Error
  Early Season w/ History                    75.5%        12.18
  Early Season w/ Modified History (0.1%)    74.8%        12.15

This does slightly improve our MOV error.  Some experimenting suggests that 0.1% is about the best we can do with this approach.  Even so, the gains over just using the straight previous-season history are minimal.

Some other possibilities suggest themselves, and I intend to look at them as time permits.



