Recall the (revised) formula for RPI:

RPI = 0.23*WP + 0.23*OWP + 0.54*OOWP

The last two terms of this formula can be thought of as a measure of a team's "Strength of Schedule", expressed as the winning percentage of a team's opponents and of their opponents. RPI arbitrarily stops evaluating this "Strength of Schedule" term at two levels. Does extending it to more levels (e.g., OOOWP) add any predictive value?
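To make the formula concrete, here is a minimal sketch of computing WP, OWP, OOWP (and deeper levels such as OOOWP) from a list of game results. The schedule and team names are invented for illustration, and unlike the official RPI this sketch does not exclude games against the team being rated when computing its opponents' winning percentages.

```python
from statistics import mean

# Hypothetical game results: each entry is (winner, loser).
games = [("A", "B"), ("A", "C"), ("B", "C"),
         ("C", "D"), ("D", "B"), ("A", "D")]

teams = sorted({t for g in games for t in g})
opponents = {t: [] for t in teams}
wins = {t: 0 for t in teams}

for winner, loser in games:
    wins[winner] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

# Plain winning percentage (WP).
wp = {t: wins[t] / len(opponents[t]) for t in teams}

def owp(team, depth):
    """Average opponents' winning percentage, recursing `depth` levels.

    depth=1 gives OWP, depth=2 gives OOWP, depth=3 gives OOOWP, etc.
    """
    if depth == 0:
        return wp[team]
    return mean(owp(opp, depth - 1) for opp in opponents[team])

# The revised RPI quoted above.
def rpi(team):
    return 0.23 * wp[team] + 0.23 * owp(team, 1) + 0.54 * owp(team, 2)
```

Because `owp` takes the depth as a parameter, the same code evaluates OOOWP (`owp(team, 3)`) or any deeper level without new machinery.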

The answer turns out to be yes and no. With a formula of approximately:

RPI = 7*WP + 7*OWP + 7*OOWP + OOOWP

we get this performance:

Predictor | % Correct | MOV Error
---|---|---
1-Bit | 62.6% | 14.17
RPI (unw,15+15+70) | 75.4% | 11.49
RPI (+oowp) | 74.6% | 11.36

This reduces the MOV Error but doesn't improve % Correct (which in fact slips slightly, from 75.4% to 74.6%).
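As a quick arithmetic check (not from the post itself), the 7:7:7:1 blend can be normalized so its weights sum to one, which makes it easier to compare with the three-term revised formula:

```python
# Normalize the approximate 7:7:7:1 weights on WP, OWP, OOWP, OOOWP.
weights = [7, 7, 7, 1]
total = sum(weights)                      # 22
normalized = [w / total for w in weights]
# Roughly [0.318, 0.318, 0.318, 0.045]: the OOOWP term
# carries only about 4.5% of the total weight.
```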

So extending the depth of RPI another step provides at least some value. This raises the natural question: is there value in extending it yet another step? And another step after that?

While we could certainly explore those possibilities manually by calculating OOOOWP and so on, it's perhaps better to cut to the chase and ask whether we can extend the depth of RPI infinitely, and see what predictive value that has. It may seem counter-intuitive, but RPI can be extended to an "infinite" depth; it just requires a different computational approach.
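One way to see how an "infinite depth" rating might be computed is as a fixed point rather than an explicit recursion. The sketch below (my own illustration, with an invented schedule and an arbitrary mixing parameter `alpha`, neither taken from the post) repeatedly updates each team's rating as a blend of its own winning percentage and its opponents' current ratings; iterating this linear update to convergence effectively sums the WP/OWP/OOWP/... series to unbounded depth with geometrically decaying weights.

```python
from statistics import mean

# Hypothetical game results: each entry is (winner, loser).
games = [("A", "B"), ("A", "C"), ("B", "C"),
         ("C", "D"), ("D", "B"), ("A", "D")]

teams = sorted({t for g in games for t in g})
opponents = {t: [] for t in teams}
wins = {t: 0 for t in teams}
for winner, loser in games:
    wins[winner] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

# Plain winning percentage (WP).
wp = {t: wins[t] / len(opponents[t]) for t in teams}

# alpha controls how much weight flows to deeper opponent levels;
# 0.5 is an arbitrary choice for illustration.
alpha = 0.5
ratings = dict(wp)
for _ in range(200):  # iterate the update until it converges
    ratings = {t: (1 - alpha) * wp[t]
                  + alpha * mean(ratings[o] for o in opponents[t])
               for t in teams}
```

Because the update is a contraction for `alpha < 1`, the iteration converges to a unique fixed point, giving the "infinite depth" rating without ever computing OOOWP, OOOOWP, and so on explicitly.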

Extending the depth infinitely sounds like an approach closer to the Colley Matrix.

@Probable: Thanks for that pointer; I hadn't seen that. There are several different approaches to "infinite depth" ratings, and I'll hit some of them after I'm done bashing RPI. I'll add the Colley Matrix to the list. At a quick glance it looks reasonable to implement.
