Tuesday, April 4, 2017

2017 Machine Madness Winner

The 2017 NCAA season ended with a win by UNC over Gonzaga in a title game marred by both poor shooting and excessive officiating.  I didn't have much time/enthusiasm for college basketball this year, but I did want to take a moment to congratulate the winner of the Machine Madness competition...  me.  :-)

My entry barely squeaked out a win over Erik Forseth's entry.  The Net Prophet approach to tournament pools takes the base predictions from the prediction model and then picks some number of upsets (usually 8).  Two of those upsets this year were Oregon reaching the Final Four, and UNC over Gonzaga in the final game.  (Like most machine models, I had Gonzaga stronger than UNC at the start of the tournament.)  Over at the ESPN Tournament Challenge, the Net Prophet entry finished with 1500 points, good enough for 19,407th out of 13+ million brackets.  Not bad.

Lest my head swell too much, I have to point out that over in the Kaggle competition I finished in 416th place.  I didn't have much time to spend on the contest this year, so I ran last year's code with some quick hacks to get it working.  It generated a legal entry, but I had no faith in the results, so the poor showing didn't surprise me.  Then again, my model has always done worse on Kaggle than in bracket competitions, so perhaps this result was to be expected anyway.

More notably, Monte McNair and the aforementioned Erik Forseth finished 3rd and 4th in the Kaggle contest.  Those are tremendous results for both of them -- congratulations!

Thursday, March 9, 2017

2017 Machine Madness Competition

If anyone is still reading this blog, you've no doubt noticed the lack of posts this year. I've been busy with other things, and to be honest, the predictor has performed poorly for the last couple of seasons, which reduces my motivation.  Monte McNair, on the other hand, is still quite active and has just opened up the 2017 Machine Madness Competition.  If you're interested in joining, you can find the pool here.  Good Luck!

Friday, April 8, 2016

2016 Machine Madness Winner

I've been a little slow in getting around to this, but I want to congratulate "SDSU Fan" on winning the 2016 Machine Madness contest!  In real life, SDSU Fan is Peter Calhoun, a graduate student in Statistics at (no surprise) San Diego State University.  We had a very large pool of entrants this year (40!), so Peter deserves some congratulations for beating the masses.  Peter was trailing by a significant amount after the Round of 32, but strong performances in the later rounds (and especially the Final Four) resulted in a big lead by the end.

Peter's model modified the Logistic Regression/Markov Chain (LRMC) approach proposed by Kvam and Sokol to use random forests.  Peter also finished fiftieth on Kaggle -- a very strong performance all around.

Despite the large number of entries, nobody had Villanova winning it all.  I think that makes the Villanova win a "true upset".  I know in my model, Villanova played considerably better than predicted.

Speaking of my model, it follows a strategy in pool-based contests of picking some "likely" upsets to try to maximize the chance of winning.  (This is probably more important in a larger pool.)  This year, it picked Purdue to make it to the Championship Game.  Not only did that not happen, but Purdue was upset in the first round by #12 Little Rock.  I'm adding a special "Purdue Rule" to the Net Prophet model so that mistake is never repeated.  :-)

Congratulations again to Peter on a great performance!

Paper Reviews

These papers have been added to the paper archive available through the Papers link on the sidebar.  Links are also provided for direct download of the papers.

Dubbs, Alexander, "Statistics-Free Sports Prediction", arXiv.org
The author builds logistic regression models for MLB, NBA, NFL, and NHL games that use only the teams and scores.  This works best for basketball, and the author concludes that "in basketball, most statistics are subsumed by the scores of the games, whereas in baseball, football, and hockey, further study of game and player statistics is necessary to predict games as well as can be done."

COMMENT: I'm not sure the results of this paper say anything deeper than "Compared to the other major sports, NBA has a long season and the teams don't change much from year to year." 
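
For the curious, here's a minimal sketch of what a model in this spirit might look like, using scikit-learn.  The one-hot team encoding and the toy game log are my own illustration (and I've simplified scores down to win/loss), not necessarily the paper's actual setup:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy game log: (home team, away team, 1 if the home team won).
# Teams and outcomes only -- no box-score statistics.
games = [
    ("Duke", "Yale", 1),
    ("Yale", "UNC", 0),
    ("UNC", "Duke", 1),
    ("Duke", "UNC", 0),
]

# Encode each game as +1 for the home team and -1 for the away team,
# so each team's learned coefficient acts as a strength rating.
features = [{home: 1, away: -1} for home, away, _ in games]
outcomes = [won for _, _, won in games]

vec = DictVectorizer()
X = vec.fit_transform(features)
model = LogisticRegression().fit(X, outcomes)

# Estimated probability that Duke beats Yale at home.
print(model.predict_proba(vec.transform([{"Duke": 1, "Yale": -1}]))[0, 1])
```
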
Clay, Daniel, "Geospatial Determinants of Game Outcomes in NCAA Men’s Basketball," International Journal of Sport and Society, 4(4):71-81, 2015.
The authors build a logistic regression model over 1,648 NCAA Tournament games that includes features for distance traveled, time zones crossed, direction of travel, altitude, and temperature.  They conclude: "We found that traveling east reduces the odds of winning more than does traveling west, and this finding holds when controlling for strength of team, home region advantage and other covariates. Traveling longer distances (>150 miles) also has a dramatic negative effect on game outcomes..."
COMMENT: This paper shows that travel distance and direction have a statistically significant impact upon game results in the NCAA Tournament, but I want to add a few caveats to this conclusion.  First, it isn't clear that the authors understand and control for the fact that there are many more basketball programs (and arguably stronger basketball programs) on the East Coast than elsewhere in the nation.  For this reason, it's likely that teams moving west to play in the Tournament are stronger than teams moving east.  Since the authors don't adequately control for team strength, it's impossible to say whether the claimed impact of direction of travel means anything.  Second, the magnitude of these effects may not be huge.  I don't understand how the authors calculate their "Odds Ratio," but factors like strength of team are several orders of magnitude more significant in determining the outcome.  Third, the authors measure team strength by seed, which has several problems:  it's a very coarse measure, it doesn't distinguish between teams with the same seed, and it's often poorly correlated with actual team strength (i.e., teams are commonly mis-seeded).  In my experience, many factors with low significance vanish when team strength is more accurately estimated.  I think distance and direction of travel probably do have an impact on Tournament games, but I suspect the true effect is smaller than this paper would indicate.
Clay, Daniel, "Player Rotation, On-court Performance and Game Outcomes in NCAA Men's Basketball," International Journal of Performance Analysis in Sport, August 2014.

The authors look at the relationship between the size of a team's rotation (how many players play at least 10 minutes in a game) and statistics such as rebounding, shooting percentage, and so on.  They conclude that teams with deep rotations tend to rebound better, particularly on the offensive end, and also record more steals.  By contrast, teams with smaller rotations tend to shoot the ball better, both from the field and the free-throw line, and are more effective at taking care of the ball, committing fewer turnovers.  In general, a larger rotation improves the chance of winning.
COMMENT: There's quite a bit of interesting material in this paper, and I recommend reading it and drawing your own conclusions.  I have reservations about some of its conclusions because the authors have not controlled for the number of possessions in the game for many of the statistics.  Since I'd expect (for example) both the number of offensive rebounds and the depth of rotation to increase with more possessions, I'm not sure I immediately accept that teams with deeper rotations rebound better.  The authors do control for possessions in two of the statistics (offensive and defensive rating), and those conclusions are more convincing.  However, as far as I can tell the authors did nothing to control for overtime games, and that may also be affecting the results.
From the specific viewpoint of predicting game outcomes, the authors don't make use of any kind of strength rating, so it isn't clear whether depth of rotation has any predictive value that wouldn't already be covered by a good strength metric.

Monday, March 28, 2016

Sorry About That!

I have to apologize to anyone who Stole My Entry over on Kaggle, because the Net Prophet predictor has made a hash of it this Tournament, and is mired low in the Leaderboard and well below the median entry.  A number of the upsets have been very improbable according to the Net Prophet predictor and it has suffered accordingly.

It's worth noting that some others have been suffering too:  Monte McNair has done better than Net Prophet, but not by a whole lot.  Ken Massey entered for the first time and is very low on the Leaderboard (apparently because he gambled rather heavily on 2-15 matchups).  The most interesting story is ShiningMGF, who started poorly (perhaps because their first-round predictions are influenced by the Vegas lines?) but has been climbing steadily and is now in tenth place.  Top Ten finishes three years running are almost certainly a good indication that they know something the rest of us don't!

Over at the Machine Madness contest, Net Prophet isn't doing any better, being one of the many entries that predicted Kansas as the eventual champion.  It looks like "SDSU" has the win locked up already.  "Predict the Madness" is likely to finish second unless North Carolina loses the next game.  Beyond that it gets a little murky, but all the entries with UNC winning it all have an obvious advantage.

But regardless of who wins, it's been a great turnout for the contest (40 entries!) and I want to give my sincere thanks to everyone who entered.  It's really great to see so much interest and participation!


Tuesday, March 22, 2016

What Would a Perfect (Knowledge) Predictor Score in the Kaggle Competition?

It isn't possible to have a perfect predictor for NCAA Tournament games, because the outcome is probabilistic.  We can't know for sure who is going to win a game.  But we could conceivably have a predictor with perfect knowledge.  This predictor would know the true probability for every game.  That is, if Duke is 75% likely to beat Yale, the perfect knowledge predictor would provide that number.  (Because predicting the true probability results in the best score in the long run.) What would such a predictor score in the Kaggle Contest?

The Kaggle contest uses a log-loss scoring system.  In this system, a correct prediction scores the log of the prediction's confidence, and an incorrect prediction scores the log of one minus the confidence.  (For the Kaggle contest, the sign is then flipped so that smaller numbers are better.)

Let's return to our example of Duke versus Yale.  Our perfect knowledge predictor predicts Duke over Yale with 0.75 confidence.  What would this predictor score in the long run?  (I.e., if Duke and Yale played thousands of times.)  Since the prediction is also the true probability that Duke will win, that number is given by the equation:

`0.75 * ln(0.75) + (1-0.75) * ln(1-0.75)`

that is, 75%  of the time Duke will win and in those cases the predictor will score ln(0.75), and 25% of the time Yale will win and the predictor will score ln(0.25).   This happens to come out to about -0.56 (or 0.56 in Kaggle terms).
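
Here's a quick sketch of that calculation in Python (the helper name is my own):

```python
import math

def expected_log_loss(p):
    # Expected log-loss when the stated confidence equals the true
    # win probability p; a certain outcome (p = 0 or 1) scores 0.
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log(p) + (1 - p) * math.log(1 - p)

print(round(expected_log_loss(0.75), 2))  # -0.56 (0.56 in Kaggle terms)
```
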

So we see how to calculate the expected score of our perfect knowledge predictor given the true advantage.  If the favorite in every Tournament game were 75% likely to win, then our perfect predictor would be expected to score 0.56.  But we don't know the true advantage in Tournament games, and it varies from game to game.  Is there some way we can estimate it?

One approach is to use the historical results.  We know how many games were upsets in past Tournaments, so we can use this to estimate the true advantage.  For example, we can look at all the historical 7 vs. 12 matchups and use the results to estimate the true advantage in those games.  (One problem with this approach is that in every Tournament, some teams are "mis-seeded".  If we judge upsets by seed numbers, this adds some error.)

Between this Wikipedia page and this ESPN page we can determine the win percentages for every possible first-round matchup.  There have been a reasonable number of these matchups (128 for each type of first-round matchup) so we can have at least a modicum of confidence that the historical win percentage is indicative of the true advantage:

| Seed | Win Pct |
|------|---------|
| 1 vs. 16 | 100% |
| 2 vs. 15 | 94% |
| 3 vs. 14 | 84% |
| 4 vs. 13 | 80% |
| 5 vs. 12 | 64% |
| 6 vs. 11 | 64% |
| 7 vs. 10 | 61% |
| 8 vs. 9 | 51% |

Using the win percentage as the true advantage, we can then calculate what our perfect knowledge predictor would score in each type of match-up:

| Seed | Win Pct | Score |
|------|---------|-------|
| 1 vs. 16 | 100% | 0.00 |
| 2 vs. 15 | 94% | -0.22 |
| 3 vs. 14 | 84% | -0.45 |
| 4 vs. 13 | 80% | -0.50 |
| 5 vs. 12 | 64% | -0.65 |
| 6 vs. 11 | 64% | -0.65 |
| 7 vs. 10 | 61% | -0.67 |
| 8 vs. 9 | 51% | -0.69 |

Since there are equal numbers of each of these games, the average performance of the predictor is just the average of these scores:  -0.48.
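
To make the arithmetic reproducible, here's a short Python sketch that regenerates the table and the average (small differences from the table above are just rounding):

```python
import math

def expected_log_loss(p):
    # Expected log-loss when predicting the true win probability p.
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log(p) + (1 - p) * math.log(1 - p)

# Historical first-round win percentages by seed matchup, from the table above.
win_pcts = [1.00, 0.94, 0.84, 0.80, 0.64, 0.64, 0.61, 0.51]

scores = [expected_log_loss(p) for p in win_pcts]
for p, s in zip(win_pcts, scores):
    print(f"{p:.0%}  {s:.2f}")              # matches the table to within rounding
print(round(sum(scores) / len(scores), 2))  # -0.48
```
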

This analysis can be extended in a straightforward way to the later rounds of the tournament, but since there are fewer examples in each category it's hard to have much faith in some of those numbers.  But I would expect the later round games to make the perfect knowledge predictor's score worse, because more of those games are going to be close match-ups like the 8 vs. 9 case.

So 0.48 probably represents an optimistic lower bound for performance in the Kaggle competition.

UPDATE #1:

Here's a rough attempt to estimate the performance of the perfect predictor in the other rounds of the Tournament.

According to the Wikipedia page, there have been 52 upsets in the remaining rounds of the Tournament (a rate of about 2%).  If we treat all these games as having an average seed difference of 4 (a conservative estimate), then our log-loss score on these games would be about -0.66.  (Intuitively, this is what we would expect -- with most of the low seeds eliminated, games in the later rounds are between teams that are more nearly equal in strength, so our log-loss score is correspondingly worse.)  Since there are about as many first-round games as in all the other rounds combined, the overall performance is just the average of -0.48 and -0.66: -0.57 (0.57 in Kaggle terms).
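
As a rough sanity check on that arithmetic: the 0.62 below is my own interpolation for what a roughly 4-seed gap implies, working backward from the first-round table, not a number from the sources above.

```python
import math

def expected_log_loss(p):
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log(p) + (1 - p) * math.log(1 - p)

later_rounds = expected_log_loss(0.62)  # ~ -0.66 for a ~62% favorite
first_round = -0.48
print(round((first_round + later_rounds) / 2, 2))  # -0.57
```
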

UPDATE #2:

Over in the Kaggle thread on this topic, Good Spellr pointed out that if you treat the first-round games as independent events, the average score is approximately normal and you can estimate its variance as well:

`variance = (1/n^2) * sum_{i=1}^{n} p_i * (1 - p_i) * (ln(p_i / (1 - p_i)))^2`

which works out to a standard deviation of about 0.07. That means that after the first round of the tournament, the perfect knowledge predictor's score would fall in the range [0.34, 0.62] about 95% of the time.
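
Plugging the first-round table into that formula reproduces the 0.07 figure.  Each of the eight matchup types occurs once in each of the four regions, so n = 32 (the helper name is mine):

```python
import math

# First-round win probabilities from the table above.
win_pcts = [1.00, 0.94, 0.84, 0.80, 0.64, 0.64, 0.61, 0.51]

def game_variance(p):
    # Per-game variance of the log-loss when predicting the true
    # probability p: p * (1 - p) * ln(p / (1 - p))^2.
    if p in (0.0, 1.0):
        return 0.0
    return p * (1 - p) * math.log(p / (1 - p)) ** 2

games = win_pcts * 4  # 32 first-round games across the four regions
variance = sum(game_variance(p) for p in games) / len(games) ** 2
print(round(math.sqrt(variance), 2))  # ~0.07
```
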

Sunday, March 20, 2016

A Quick Update

I'm still in Brooklyn watching games (well, we're done watching now -- we had a couple of fun games) and have been too busy to do more than a minimal check of email, but I found time to check on the Machine Madness contest.  I see that we have an amazing 40 contestants -- presumably most found us through the Kaggle Contest, but it's great to see the participation.  What's not so great is that the Net Prophet entry is doing poorly both here and at the Kaggle Contest, but that's a post for another day :-)