Continuation of one metric prediction test discussion

Crow
Posts: 10533
Joined: Thu Apr 14, 2011 11:10 pm

Re: Continuation of one metric prediction test discussion

Post by Crow »

Prediction strategy and quality-of-metric testing could overlap, but I think more people want to focus on the latter than the former. It might be that two different tests would be better than one mixed-motive contest.
ca1294
Posts: 7
Joined: Wed Jan 07, 2015 4:57 am

Re: Continuation of one metric prediction test discussion

Post by ca1294 »

Crow wrote:Prediction strategy and quality-of-metric testing could overlap, but I think more people want to focus on the latter than the former. It might be that two different tests would be better than one mixed-motive contest.
I agree that it's two different tests, but my question is the same for both tests.

Assuming we use a method similar to the one described below:
1. On day x, anyone participating would submit player values for day x+1.
2. These values would then be used to predict the games of day x+1. Since the point is to compare metrics, you'd just use HCA = 3 for all teams and ignore rest-day effects. Actual realized minutes played from day x+1 would be used.
3. Repeat for all remaining days in the season.
4. Lowest RMSE vs. the actual final scores is the best metric (a rough sketch of this scoring is below).
What are the player values you submit for day x+1? Do you submit the season average up to day x?
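
For concreteness, here is a rough Python sketch of how steps 2 and 4 could be scored. The data layout, the assumption that submitted values mean net points per minute versus average, and the choice to score on point margins are illustrative assumptions only, not part of anyone's proposal:

```python
import math

HCA = 3.0  # fixed home-court advantage for every team, per step 2

def predict_margin(home_lineup, away_lineup, values):
    """Predicted home-team margin for one game of day x+1.

    home_lineup / away_lineup: lists of (player_id, actual_minutes) pairs,
    using realized minutes from day x+1 as in step 2.
    values: dict of player_id -> value submitted on day x (assumed here to
    mean net points contributed per minute versus an average player).
    """
    home = sum(values.get(p, 0.0) * mins for p, mins in home_lineup)
    away = sum(values.get(p, 0.0) * mins for p, mins in away_lineup)
    return home - away + HCA

def rmse(predicted_margins, actual_margins):
    """Step 4: root-mean-square error of predicted vs. actual margins."""
    sq_errors = [(p - a) ** 2 for p, a in zip(predicted_margins, actual_margins)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

def season_average_value(per_game_values):
    """One naive answer to the question above: the value submitted for day
    x+1 is just the average of a player's single-game values through day x."""
    return sum(per_game_values) / len(per_game_values) if per_game_values else 0.0
```

Whether "actual final scores" means the margin or both teams' totals is one of the details the test would need to pin down; the sketch assumes margin.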
Crow
Posts: 10533
Joined: Thu Apr 14, 2011 11:10 pm

Re: Continuation of one metric prediction test discussion

Post by Crow »

Depends on whether you are testing a fixed metric or a metric combined with a predictive strategy.

Most things I see and hear suggest that heavier weighting for recent results is not that helpful, though it could help some for rookies and players with new coaches / contexts.
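
Just to make "heavier weighting for recent results" concrete, a minimal sketch (the exponential form and the decay constant are purely illustrative assumptions):

```python
def recency_weighted_value(per_game_values, decay=0.95):
    """Exponentially weighted average of per-game values, oldest first.

    decay < 1 shrinks the influence of older games; decay = 1.0 reduces to a
    plain season average.
    """
    n = len(per_game_values)
    if n == 0:
        return 0.0
    weights = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(w * v for w, v in zip(weights, per_game_values))
    return total / sum(weights)
```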

If you go game to game and allow metric changes that frequently, someone might game the metric to match that day's opponent. That is fine if you are testing prediction strategy, but it's not desirable for testing a metric intended to be used as a stable tool and looked at for cumulative ratings.

If you change metrics constantly, then you have very short predictive tests, and looking back and retrodicting ends up with the same in-sample validity strike that folks disliked, right? So I'd try to settle on a way to run a long-term, stable metric test that at least some people would find valuable. Just my take at the moment, but I am weary from all the prep, debate, and uncertainty.

I think what you want is mainly a prediction test with no restrictions. So maybe offer that, see who is interested, and let the metric findings come out of it as a byproduct rather than the main goal, with the other metric test happening separately or not at all.

I dunno how many want a super flexible prediction test though.
See what feedback comes in from others.