Another issue, of course, is that the most-regressed predictions will be favored this early, and the contest standings will be artificially rearranged as the season progresses.
So I've made a de-regressed version of the team projections. Still using b-r.com's numbers, we can simply double each team's difference from 41 wins:
BR*2 - 41 gives a mostly performance-based projection, though still with some regression toward .500 (a quick sketch of the arithmetic follows the table).
Code: Select all
west W east W
Uta 62 Mia 71
Por 51 Chi 59
Den 50 NYK 55
GSW 49 Phl 51
Min 44 Was 50
Sac 43 Cha 46
Mem 38 Tor 46
SAS 38 Mil 41
LAC 38 Brk 41
LAL 33 Atl 37
Phx 33 Ind 36
Dal 31 Cle 34
NOP 30 Bos 33
Hou 23 Det 29
OKC 14 Orl 21
https://www.basketball-reference.com/fr ... _prob.html
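For concreteness, here's a minimal sketch of that de-regression in Python. The b-r.com inputs are back-solved guesses of mine (51.5 for Utah and 56.0 for Miami reproduce the 62 and 71 above), not b-r.com's actual published numbers, and the variable names are mine.
Code: Select all
# De-regress b-r.com's projected wins by doubling the distance from 41
# (a .500 record over 82 games). Input values are back-solved examples.
br_wins = {"Uta": 51.5, "Mia": 56.0}

deregressed = {team: 2 * w - 41 for team, w in br_wins.items()}
print(deregressed)  # {'Uta': 62.0, 'Mia': 71.0}, matching the table above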
And relative to these, our contest (with a few dummy entries) would look like this:
Code: Select all
entry  avg err  rmse    entry  avg err  rmse
cali     8.7    10.3    dtka     9.4    11.4
21re     8.9    10.7    shad     9.4    11.0
bpmW     9.0    10.9    perW     9.4    11.3
avgA     9.0    10.6    Crow     9.6    11.1
WShr     9.1    10.8    emin     9.6    11.5
5.38     9.1    10.8    4141     9.7    12.2
trzu     9.2    11.2    vegas    9.8    11.3
lisp     9.3    11.1    2021     9.9    12.0
eWin     9.3    11.0
2021 is just last year's win total * 82/72 (prorating the 72-game season to a full 82)
21re is that number regressed halfway back to .500
avgA is the average of 8 actual APBR submissions plus my 3 extras
4141 is 41 wins for every team
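For reference, the two columns are plain mean absolute error and root-mean-square error against the de-regressed targets. A minimal sketch; the entry data and names here are illustrative, not the real submissions:
Code: Select all
from math import sqrt

def score(entry, target):
    """Mean absolute error and RMSE of an entry's projected wins
    versus the target (here, the BR*2-41 de-regressed numbers)."""
    errs = [entry[t] - target[t] for t in target]
    avg_err = sum(abs(e) for e in errs) / len(errs)
    rmse = sqrt(sum(e * e for e in errs) / len(errs))
    return avg_err, rmse

# e.g. the 4141 dummy, scored against a small subset of the table above
target = {"Uta": 62, "Mia": 71, "Hou": 23, "Orl": 21}
flat_41 = {team: 41 for team in target}
print(score(flat_41, target))  # (22.25, ~22.72) on this extreme subset only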
I'm inclined to phase in this 'de-regressed' projection, to get the dummies into their rightful places a bit sooner and give earlier props to those who got things right. So today I'm going with 75% straight b-r.com and 25% the less mushy version, then adding another 5% every day to the radical side (see the sketch after the table)?
Code: Select all
entry  avg err  rmse    entry  avg err  rmse    (at the .25 blend)
21re     5.71   7.00    dtka     7.57   8.90
4141     6.07   7.63    shad     7.57   8.67
eWin     6.14   7.38    lisp     7.58   8.71
perW     6.22   7.49    vegas    7.65   8.77
bpmW     6.51   7.89    5.38     7.71   8.63
avgA     6.77   7.97    Crow     7.72   9.04
cali     6.98   8.05    emin     8.39   9.95
WShr     7.22   8.44    2021     8.48  10.13
trzu     7.42   9.02
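And here's the phase-in schedule as a sketch. The function and weight names are mine, and the cap at 100% is my assumption about where the ramp stops:
Code: Select all
def blended_target(br, days_since_today):
    """Blend b-r.com's projection with its de-regressed version:
    25% de-regressed today, shifting 5 points per day toward it."""
    dereg = 2 * br - 41
    w = min(0.25 + 0.05 * days_since_today, 1.0)  # assumed cap at 100%
    return (1 - w) * br + w * dereg  # simplifies to br + w*(br - 41)

print(blended_target(51.5, 0))   # 54.125: the 75/25 blend of 51.5 and 62.0
print(blended_target(51.5, 15))  # 62.0: fully de-regressed after 15 more days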