Wednesday, March 4, 2015

So What About Me?

I've recently put up a few posts about the Kaggle competition, including one about reasonable limits to performance in the contest.  So it's natural to wonder how I'm doing / have done.

Fair enough.

Last year, my entry ended up finishing at 60th on the Kaggle leaderboard, with a score of 0.57.  At one point that was exactly at the median benchmark, but apparently the post-contest cleanup of DQed entries changed that slightly.  2014 wasn't a particularly good year for my predictor.   Here are the scores for the other seasons since 2009:

Year    Score
2009    0.46
2010    0.53
2011    0.62
2012    0.52
2013    0.51

2014 was my worst year since 2011.  (2011 was the Chinese Year of the Upset, with a Final Four of a #3, #4, #8 and #11 seed.)  Ironically, I won the Machine Madness contest in 2011 because my strategy for that contest included predicting some upsets, which led me to correctly pick Connecticut as the champion.

My predictor isn't intended specifically for the Tournament.  It's optimized for predicting Margin of Victory (MOV) across all NCAA games.  That includes the Tournament, but those games are such a small fraction of the overall set that they don't particularly influence the model.  There are some things I could do to (hypothetically) improve my predictor's performance in the Kaggle competition.  First, I could build a model that predicts win percentages directly, rather than translating from predicted MOV to win percentage.  Second, since my underlying model is a linear regression, I implicitly optimize RMSE.  I think it's likely that a model optimized on mean absolute error would do better[1], but I haven't yet found a machine learning approach that can create a model optimized on mean absolute error with performance equaling linear regression.
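For readers curious what "translating from predicted MOV to win percentage" might look like, here's a minimal sketch.  This is my illustration, not the actual translation the predictor uses: it assumes the true margin is normally distributed around the predicted MOV with some standard deviation (the ~11-point figure is a commonly cited ballpark for college basketball, and `sigma` here is an assumed parameter).

```python
import math

def mov_to_win_prob(predicted_mov, sigma=11.0):
    """Translate a predicted margin of victory into a win probability.

    Assumes the actual margin is Normal(predicted_mov, sigma^2), so the
    win probability is P(actual margin > 0).  Both the distributional
    assumption and sigma=11.0 are illustrative, not the author's model.
    """
    return 0.5 * (1.0 + math.erf(predicted_mov / (sigma * math.sqrt(2.0))))

# A predicted 0-point margin maps to a 50% win probability,
# and larger predicted margins map to probabilities closer to 1.
```

Note that the curve is symmetric: a team favored by 7 gets exactly the complement of the probability its opponent would get at -7.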

I haven't put much effort into building a "Tournament optimized" predictor because (as I have pointed out previously) there is a large random element to the Tournament performance.  Any small gains I might make by building a Tournament-specific model would be swamped by the random fluctuations in the actual outcomes.



[1] I say this because RMSE weights outliers more heavily.  Although there are a few Tournament matchups between teams of very different strengths (the 1-16 and 2-15 matchups in particular), you might suppose that there are fewer such mismatches than in the regular season, and that being slightly more wrong on those matchups won't hurt you much if you're also slightly more correct on the rest of the Tournament games.  That's just speculation on my part, though.
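To make the outlier-weighting point concrete, here's a toy comparison (my own illustration, with made-up error values): two sets of per-game prediction errors with identical mean absolute error but very different RMSE, because squaring amplifies the single big miss.

```python
import math

def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

steady = [2, 2, 2, 2]   # consistently off by 2 points
spiky = [0, 0, 0, 8]    # perfect three times, one blowout miss

# Both sets have MAE = 2.0, but the spiky set's RMSE is double
# the steady set's: squared loss punishes the one big miss hard.
```

So a model fit under RMSE is pushed to avoid occasional large misses (like a 1-16 blowout) at the expense of typical games, which is exactly why an MAE-optimized model might behave differently.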
