Taking the 2-3 games each team has played and their MOV -- not considering SOS -- we can fancifully project teams' win totals over 82 games.
These range from 2 wins for Phx up to 77/78 for the Raps and Clipps.
Using these as 25% of the projection, and our group avg. prediction for the other 75%, we get the following:
GSW 60 Tor 55
Hou 58 Cle 51
SAS 55 Was 49
LAC 55 Bos 46
Okl 50 Cha 46
Por 49 Mil 43
Den 47 Mia 41
Min 46 Det 40
Uta 46 Orl 39
Mem 43 Brk 33
NOP 39 Ind 31
Dal 29 Phl 29
LAL 27 NYK 26
Sac 26 Atl 25
Phx 21 Chi 22
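For anyone wanting to reproduce the blend, here's a minimal sketch. The post doesn't say how MOV was mapped to an 82-game win total, so the 0.0325 slope (roughly 3.25% of win percentage per point of MOV) is a common rule of thumb I've assumed, not the author's stated method; the 25/75 weights are from the post.

```python
def project_wins(mov, group_avg_wins, mov_weight=0.25):
    # Assumed rule of thumb: each point of MOV is worth about 3.25% of
    # win percentage; clamp so an extreme early-season MOV stays in 0-82.
    mov_wins = 82 * min(1.0, max(0.0, 0.5 + 0.0325 * mov))
    # Blend: 25% small-sample MOV extrapolation, 75% group avg prediction.
    return mov_weight * mov_wins + (1 - mov_weight) * group_avg_wins

# A team playing exactly even (MOV 0) with a 49-win group prediction:
print(round(project_wins(0.0, 49.0), 1))  # 47.0
```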
Relative to these projections, our average errors:
Is there a spreadsheet with all the picks available? I want to run some of my own sets of rankings, and am trying to avoid duplicating the pick assembly process.
Here's another set of ranks, using a method suggested by Justin Kubatko some years back to regress small-sample SRS: Rest of Season SRS = (YTD_SRS * YTD_Games)/(YTD_Games + 9.811).
It's still pretty noisy at this point (see Golden State for instance, who has a pretty terrible SRS), but gets meaningful pretty quickly.
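The regression above is a one-liner in code. The 9.811 constant effectively mixes about ten games of league-average (zero) SRS into the sample, which is why the estimate "gets meaningful pretty quickly" as real games accumulate:

```python
def rest_of_season_srs(ytd_srs, ytd_games, k=9.811):
    # Kubatko-style shrinkage: pad the sample with ~9.811 games of
    # league-average (zero) SRS, pulling small samples toward 0.
    return (ytd_srs * ytd_games) / (ytd_games + k)

# A -6.0 SRS through 3 games regresses most of the way to zero:
print(round(rest_of_season_srs(-6.0, 3), 2))  # -1.41
```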
Here's another just-for-fun stab at it.
Using a hybrid (MOV+SRS)/2 in place of straight MOV in the PythW% formula, together with current wins and losses;
then weighting that at 50% and our avg. projection at the other 50%;
we project these records:
LAC 62 Tor 61
SAS 60 Cle 54
GSW 60 Orl 49
Por 58 Bos 49
Mem 51 Mia 48
Hou 50 Mil 47
Okl 46 Was 47
Uta 42 Ind 44
Den 40 Brk 40
Min 40 Cha 38
NOP 37 Det 34
LAL 25 Phl 28
Sac 21 Chi 27
Dal 20 Atl 22
Phx 16 NYK 20
In this scenario, the West averages 41.8 wins and the East 40.5.
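A sketch of how that hybrid projection might be computed. The 50/50 blend, the (MOV+SRS)/2 rating, and the use of the current record are from the post; the league scoring level (100 points) and the Pythagorean exponent (14) are assumptions I've filled in, since the post doesn't say which values were used.

```python
def hybrid_pyth_projection(mov, srs, wins, losses, avg_pred,
                           lg_pts=100.0, exp=14.0):
    # Hybrid rating: average of MOV and SRS, per the post.
    rating = (mov + srs) / 2
    # Split the rating into points for/against around an assumed
    # league-average scoring level, then apply the Pythagorean formula.
    pf, pa = lg_pts + rating / 2, lg_pts - rating / 2
    pyth_pct = pf**exp / (pf**exp + pa**exp)
    # Current record plus Pythagorean pace over the remaining games...
    remaining = 82 - wins - losses
    internal = wins + pyth_pct * remaining
    # ...blended 50/50 with the group's average prediction.
    return 0.5 * internal + 0.5 * avg_pred

# A 2-2 team playing dead even, with a 50-win group prediction:
print(round(hybrid_pyth_projection(0.0, 0.0, 2, 2, 50.0), 1))  # 45.5
```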
Relative to these projections, our avg errors are:
You might think the entry known as avg* would rank higher, since half of each projection is built from its own values.
After 3 to 5 games, here's each team ranked by how much worse or better it looks than our avg. prediction:
Worse Better
Min -14 Por 16
Den -11 Orl 16
NYK -11 LAC 15
Dal -8 Brk 11
Phx -7 Tor 9
Hou -7 Mem 8
Phl -6 SAS 6
Uta -6 Det 4
NOP -5 Ind 4
Sac -5 Mil 2
GSW -5 Chi 1
Atl -5 LAL 1
Cle -3 Bos 1
Was -3 Cha 0
Mia -2
Okl -1
I slightly tweaked expected wins here by additionally regressing the impact of opposing YTD 3PT% in each team's SRS to date. Benefits some teams like the Suns, hurts the Hornets, etc...
I also added an entrant called BetOnline (a sportsbook), with their posted Over/Unders as of October 15, and added a column showing each projected wins vs. BetOnline.
Which one is the Pinnacle line? I can add it. BetOnline may not be the best proxy for "Vegas" it occurs to me, as they often juice a lot of their lines rather than move them outright. So their Celtics line was 52.5 (-150), while other sportsbooks may have just moved the over/under down to 52.5.
Pinnacle predictions were submitted by shadow, after he balanced them to 1230 total wins. It's the first submission in this thread.
He gave a prediction of 52.38 for Bos, nearly identical to the 52.5 you mention. Maybe just the difference between an additive and a multiplicative adjustment?
Pinnacle also had some of the lines heavily juiced. So I calculated what the implied line was for each team if the juice was even on both sides. As long as you pulled the lines from BetOnline close to the start of the regular season before the market closed it's probably fine, assuming you make the adjustment for any heavily juiced sides. BetOnline usually just clones Pinnacle for the most part once Pinnacle's lines are up. You just wouldn't want to use the $250 BetOnline openers as a barometer for "Vegas" since those lines aren't very sharp and don't represent a liquid market.
Each half win was worth about 7 cents by my math. I think Pinnacle was dealing 16-cent lines, so a typical line would be Over 41 -108 / Under 41 -108. If the line was instead Over 41 -115 / Under 41 -101, then the true line is 41.5.
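The de-juicing arithmetic above can be sketched as follows. The 7-cents-per-half-win figure is from the post; the function assumes both prices are negative American odds on either side of a balanced market, which is the only case the post describes.

```python
def implied_true_line(posted_line, over_odds, under_odds,
                      cents_per_half_win=7):
    # Balanced price is the midpoint of the two (negative American) odds;
    # e.g. (-115 + -101) / 2 = -108 for a 16-cent market.
    balanced = (over_odds + under_odds) / 2
    # Cents of juice on the over: how much pricier the over is vs. balanced.
    over_cents = balanced - over_odds
    # Shift the line half a win per cents_per_half_win of over-juice.
    return posted_line + 0.5 * over_cents / cents_per_half_win

print(implied_true_line(41, -115, -101))  # 41.5
print(implied_true_line(41, -108, -108))  # 41.0 (no juice, line stands)
```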
Approaches to team projections will vary some in the details, but the general approach of most is: project player performance based on the past (and maybe aging), convert performance to win impacts, project minutes, and sum the products. Some folks regress to the mean, blend metrics, or make subjective adjustments.
There may be some useful discussion to come, or in past contest threads. Not everyone wants to give away the details of their approach.
The scoring methods for the contest are a kind of prediction too, based on recent in-season team performance. They might be easier for what you seem to want to do. Try them, see how they do, adjust them. There is no one right answer.
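Crow's "sum the products" step might look like this in miniature. The roster, the per-48-minute ratings, and the 41-win average-team baseline are all invented for illustration; real systems plug in projected metrics (regressed, blended, adjusted) in place of these made-up numbers.

```python
def team_wins(players, base_wins=41):
    # players: list of (wins added per 48 minutes vs. average, projected
    # minutes). Convert each to a season win impact and sum; 48 = game length.
    return base_wins + sum(rate * minutes / 48 for rate, minutes in players)

# A hypothetical five-man rotation (ratings and minutes are made up):
roster = [
    (0.15, 2800),   # star
    (0.05, 2200),
    (-0.02, 1800),
    (0.00, 1500),
    (-0.05, 1200),
]
print(round(team_wins(roster), 1))  # 50.0
```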
Crow wrote:Approaches to team projections will vary some in the details, but the general approach of most is: project player performance based on the past (and maybe aging), convert performance to win impacts, project minutes, and sum the products. Some folks regress to the mean, blend metrics, or make subjective adjustments.
There may be some useful discussion to come, or in past contest threads. Not everyone wants to give away the details of their approach.
The scoring methods for the contest are a kind of prediction too, based on recent in-season team performance. They might be easier for what you seem to want to do. Try them, see how they do, adjust them. There is no one right answer.
Thank you Crow for your answer!!! Very interesting!!!