2024-25 team win projection contest

Crow
Posts: 10533
Joined: Thu Apr 14, 2011 11:10 pm

Re: 2024-25 team win projection contest

Post by Crow »

Alright.

I will probably pay a bit more attention to the top and bottom 6 pre-season teams.

Thanks for the replies.

I use Darko a lot, but when I make the time, I'll check everything for range and average.

Do you have any interest in using other metrics in any way? To refine a prior or modify a final estimate?
Youltaithe
Posts: 2
Joined: Tue Nov 12, 2024 3:54 pm

Re: 2024-25 team win projection contest

Post by Youltaithe »

dtkavana wrote: Thu Mar 13, 2025 1:52 pm This is completely outside of my level of expertise, but here's what Chat GPT 4o has to say:
This is a great question, and it's worth questioning why RMSE (e = 2) is the default error metric in so many contests.

RMSE vs. MAE
RMSE (Root Mean Squared Error, e = 2): Penalizes larger errors more heavily than smaller ones. This is useful when you want to emphasize avoiding big misses.
MAE (Mean Absolute Error, e = 1): Treats all errors equally, making it easier to interpret and explain, but less sensitive to outliers.
The Case for e ≠ 2
You're right to be skeptical of RMSE's dominance. The choice of exponent (e) determines how harshly large errors are punished. If you set e < 2, you're reducing the penalty for big misses, while e > 2 makes the metric even more extreme.

Something like e = 1.5 or 1.75 could be an interesting compromise: more punishing than MAE but without RMSE's harsh treatment of big errors. This might better reflect the importance of getting all teams reasonably close rather than just avoiding a few disasters.

What Matters for the Contest?
For a win projection contest, should a single bad miss (like predicting 30 wins for a team that gets 50) outweigh a handful of small misses? If the goal is consistency across all teams, a lower exponent (closer to 1) makes sense. If punishing big misses is important, e = 2 or higher is justifiable.
Thanks for the detailed explanation! I completely agree that the choice of error metric depends on the goal of the problem. RMSE is useful when you need to avoid large misses, but it can over-penalize a few individual outliers. MAE is more "fair" to all errors, but may not capture how costly large deviations are.

An interesting idea is to use intermediate values of the exponent, for example 1.5 or 1.75, to balance the metric's sensitivity to large and small errors. Depending on the context of the contest or problem, it is worth choosing the metric that reflects the real priorities: either consistency across all teams, or a focus on avoiding large misses.
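For anyone who wants to see how that plays out, here is a rough sketch (Python, with made-up win totals purely for illustration, not actual contest data) of a generalized exponent-e error, where e = 1 reduces to MAE and e = 2 to RMSE:

Code:

import numpy as np

def exponent_error(predicted, actual, e):
    # Mean of |prediction error|^e, taken to the 1/e power.
    # e = 1 gives MAE, e = 2 gives RMSE, values in between are a compromise.
    errors = np.abs(np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float))
    return np.mean(errors ** e) ** (1.0 / e)

# Hypothetical projections vs. actual wins (made up, one big miss included)
predicted = [52, 45, 30, 25, 60]
actual    = [50, 47, 50, 24, 58]

for e in (1.0, 1.5, 1.75, 2.0):
    print(f"e = {e:.2f}: error = {exponent_error(predicted, actual, e):.2f}")

The higher the exponent, the more the single 20-win miss dominates the score.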

It would be great if the contest organizers explained their choice of metric; that would make it easier for participants to adapt their models to the specific requirements.

Thanks again for your valuable thoughts!
Mike G
Posts: 6144
Joined: Fri Apr 15, 2011 12:02 am
Location: Asheville, NC

Re: 2024-25 team win projection contest

Post by Mike G »

We could study previous years' contests and see what exponents produce errors in Dec. that most closely match the final season errors.
In other words, RMSE in Dec. may be better or worse than MAE at predicting RMSE in April. It may be better at predicting MAE in Apr.

The reason we don't always use MAE (e=1) is that another exponent may predict better. Almost always, the best MAE is also the best RMSE in this contest.
And if you follow the updates, you'll see I frequently listed the error leaders at several exponents.
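A rough sketch of how that study could be set up (Python; the arrays below are random placeholders standing in for past contest entries and win paces, not real data):

Code:

import numpy as np

def exp_error(pred, actual, e):
    # Mean |error|^e to the 1/e power: e = 1 is MAE, e = 2 is RMSE.
    return (np.abs(pred - actual) ** e).mean() ** (1.0 / e)

rng = np.random.default_rng(0)
# Placeholder data: 20 hypothetical entries, each projecting 30 team win totals
entries    = rng.normal(41, 10, size=(20, 30))
final_wins = rng.normal(41, 12, size=30)             # stand-in for April totals
dec_pace   = final_wins + rng.normal(0, 4, size=30)  # stand-in for Dec. win pace

# Final-season error for each entry, scored at the "target" exponent (MAE here)
final_err = np.array([exp_error(p, final_wins, 1.0) for p in entries])

for e in (1.0, 1.5, 2.0, 3.0):
    dec_err = np.array([exp_error(p, dec_pace, e) for p in entries])
    # How closely does the December standing at exponent e track the final result?
    r = np.corrcoef(dec_err, final_err)[0, 1]
    print(f"Dec. error at e = {e}: correlation with final MAE = {r:.2f}")

With the real historical entries and actual December paces plugged in, the exponent whose December standings best line up with the final standings would be the one to prefer.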

Yeah, it's essentially impossible to explain any error metric other than mean absolute error to almost anyone.