Coach RAPM
Re: Coach RAPM
To clear up some possible confusion: this is standard ridge regression, not the modified version where you can choose priors that are different from zero. Thus everyone, even coaches, has a prior of 0. If someone's estimate is far from zero, there must have been a lot of data to support that estimate.
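For anyone unfamiliar with that distinction, here is a minimal numpy sketch (not J.E.'s actual code) of standard ridge versus ridge with a chosen prior; X, y, and lam stand in for the stint design matrix, point margins, and penalty value.
Code:
import numpy as np

def ridge_zero_prior(X, y, lam):
    """Standard ridge: every coefficient (players and coaches alike) is
    shrunk toward a prior of 0."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def ridge_with_prior(X, y, lam, prior):
    """Modified ridge: coefficients are shrunk toward a chosen prior vector
    instead of 0 (not what is used for these coach estimates, per the post above)."""
    n_feat = X.shape[1]
    return prior + np.linalg.solve(X.T @ X + lam * np.eye(n_feat),
                                   X.T @ (y - X @ prior))
With prior equal to the zero vector the two functions coincide, which is the point above: under an all-zero prior, a coach estimate far from zero has to be earned from a lot of data.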
Re: Coach RAPM
To be clear: this regression is only measuring 1 small facet of a coach's role, and that only approximately.
This only says: given x players ON THE COURT, did they perform better with Coach A or Coach B?
That is only a very small part of the coaching equation, and this only has a few data points even for that estimate. Because of the variability of how players age and the smallness of the sample size, the results for even that estimate are approximate at best.
So don't overreact to what the results say.
Re: Coach RAPM
schtevie wrote: Recalibrating beliefs about front office competence: Returning to the exceptional Gregg Popovich and the Spurs, again, if Pops is indeed the coach represented (+3.3) and we believe that Tim Duncan has averaged about a +8 since 2001, and Ginobili +4, and Parker +3 (I'm just guessing about all these, but the point should be clear) this implies that all non-named Spurs would have to have been simply terrible on average. Given these assumptions, it is impossible to consider the Spurs to have been a well-run franchise. And a very similar story applies to the very well-respected Thunder, just substitute Brooks, Durant, Harden, and Westbrook, and you must have one of the worst supporting casts of all time.
The regression that I use to get coach estimates also spits out player estimates. The Spurs have Diaw, Mills, Leonard, Splitter, Green, etc. (at some point I stopped looking for more) all way positive. The Thunder have Collison, Jackson, Ibaka etc. all way positive. Popovich having a +3.3 does not imply 'all non-named Spurs would have to have been simply terrible on average'.
For all its theoretical shortcomings I find that the overall ranking mostly agrees with public perception and my own.
Except for the extreme cases, the general height of the estimates is something I can't really argue with, as that was found through empirical research. This research could have potentially told us that all estimates for total coach impact 'should be' no further from 0 than, e.g., 2, giving us a range from -2 to +2 from worst to best. Or it could have told us the range should be -10 to +10. Or it could have told us the range should be 0. In the end it told us the range should be what you see here.
Scott Brooks' estimate does seem high, and maybe an estimate this high is not correct, but there are lots of coaches close to either +4 or -4, so I have little doubt that this is close to the 'real difference' between the best and the worst coach.
Terry Stotts' ranking surprised me a little, given how well POR is playing this season. Other than that:
- the COY candidates this year are probably going to be Vogel, Brooks, Stotts, Hornacek and maybe Popovich (given the age of his roster). Except for the aforementioned Stotts these are ranked #1, #2, #8, #11
- Hornacek, ranked #2, was expected to go 20-62 in Phoenix, now is 30-21
- Brian Shaw's ranking (#3) might seem high at first, but he got 0 minutes from Gallinari. Iguodala is also not in Denver anymore. His best player is ... Ty Lawson? J.J. Hickson and Randy Foye are #2 and #3 in minutes and the team has a solid SRS of -1
- Westbrook, normally regarded as OKC's 2nd best player, has played less than half the games and OKC is still #1 in the league. Another superstar, Harden, left 18 months ago
- Except for Jason Kidd, none of the bottom 23 coaches has an NBA job right now. Kidd is currently 24-27 in the terrible East, with a team that was expected (by Vegas) to go 53-29
- past Coach of the Year winners are ranked (out of 117): #16 (Karl), #11 (Popovich), #9 (Thibodeau), #1 (Brooks), #43 (Mike Brown, currently underwhelming in CLE), #105 (Byron Scott in '08, who coached a team of Paul, West, T.Chandler, Stojakovic, all healthy, to 56 wins. Not too impressive in my eyes. Later had underwhelming years in CLE), #24 (Sam Mitchell), #32 (Avery Johnson), #60 (D'Antoni, later had underwhelming years in NYK), #27 (Hubie Brown), #11 (Popovich), #65 (Carlisle), #61 (Larry Brown)
Re: Coach RAPM
J.E. wrote: The regression that I use to get coach estimates also spits out player estimates. The Spurs have Diaw, Mills, Leonard, Splitter, Green, etc. (at some point I stopped looking for more) all way positive. The Thunder have Collison, Jackson, Ibaka etc. all way positive. Popovich having a +3.3 does not imply 'all non-named Spurs would have to have been simply terrible on average'.
I think this is a slight misinterpretation of the point I was making. Though there is no adding-up constraint imposed in the regressions, my expectation is that the weighted sum of player and coach contributions would approximate the observed team performance (similar to the players-only regressions). Stipulating this, in the players-only regressions, year in, year out, the sum of Duncan's, Ginobili's, and Parker's contributions (and throw in David Robinson's for the first couple of years) is greater than the Spurs' ORtg - DRtg. This implies that the contribution of "all others" is negative. Of course, if Pops' +3.3 in the new regression all comes out of these players' hides, there should be no change in the rating of "all others", and I wasn't trying to suggest otherwise.
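For concreteness, here is a toy version of that adding-up arithmetic in Python, using only the guessed ratings from the quote; the minute shares and the 'all others' value are invented for illustration.
Code:
# Toy adding-up check: a team's ORtg - DRtg should roughly equal the
# minutes-share-weighted sum of its players' ratings (the shares sum to 5,
# since five players are on the floor at a time). Ratings are the guesses
# from the quoted post; the shares and the 'all others' value are made up.
stars = {
    "Duncan":   (+8.0, 0.70),   # (rating guess, fraction of team minutes)
    "Ginobili": (+4.0, 0.55),
    "Parker":   (+3.0, 0.70),
}
all_others_rating = -1.0                 # hypothetical average of everyone else
star_share = sum(share for _, share in stars.values())
others_share = 5.0 - star_share          # remaining on-court slots

implied_net = (sum(r * s for r, s in stars.values())
               + all_others_rating * others_share)
print(implied_net)   # roughly +6.9 with these made-up inputs
If the stars' weighted sum alone already exceeds the observed team net rating, the implied average of "all others" has to be negative, which is the inference being made about the Spurs above.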
J.E. wrote: For all its theoretical shortcomings I find that the overall ranking mostly agrees with public perception and my own.
Except for the extreme cases, the general height of the estimates is something I can't really argue with, as that was found through empirical research. This research could have potentially told us that all estimates for total coach impact 'should be' no further from 0 than, e.g., 2, giving us a range from -2 to +2 from worst to best. Or it could have told us the range should be -10 to +10. Or it could have told us the range should be 0. In the end it told us the range should be what you see here.
I am scratching my head a bit about the "...I can't really argue with, as that was found through empirical research" comment. This recalls an exchange I had here with Joe Sill after the release of his initial RAPM results. I made a very similar point then: the range (again, I was focusing on the elite players) was clearly "wrong". Intuitively, the best players were significantly better than suggested (I cannot recall exactly what the extreme result was; +4?) AND the results were simply not compatible with standard errors returned by multi-season APM. I don't want to put words in his mouth, but I recall his reply to have been along the lines that "this was found through empirical research", and, of course, the explanatory power of the regression was superior to APM.
But then along came Jeremias Engelmann, among others, to show that with reasoned and reasonable priors, the explanatory power of the regression can be increased, yielding results compatible with "informed intuition".
So, my point is that the same should be expected here. If your generally preferred approach for estimating player values creates results as featured at http://stats-for-the-nba.appspot.com/, why take a different approach for the contributions of the "sixth man"? Please explain.
I am supposing that if you were to do so, and took a similar approach with coaches (entering the regression at zero, with subsequent priors being the previous year's rating, regressed to the mean to reflect, theoretically, the perishability of coaching innovation), you would get a far different range of revealed coaching ability.
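A minimal sketch of that season-chained scheme as I read it: one ridge fit per season, with each season's prior being last year's estimate shrunk toward zero. The 0.5 shrink factor and the assumption that every season's design matrix shares one column layout are illustrative choices, not anything specified in the thread.
Code:
import numpy as np

def ridge_toward_prior(X, y, lam, prior):
    """Ridge regression that shrinks coefficients toward `prior` instead of 0."""
    n_feat = X.shape[1]
    return prior + np.linalg.solve(X.T @ X + lam * np.eye(n_feat),
                                   X.T @ (y - X @ prior))

def chained_season_estimates(seasons, lam, shrink=0.5):
    """One fit per season; each new season's prior is last season's estimate
    regressed toward the mean (zero). `seasons` is a list of (X, y) pairs
    whose columns refer to the same players/coaches in the same order."""
    n_feat = seasons[0][0].shape[1]
    prior = np.zeros(n_feat)          # everyone enters the regression at 0
    history = []
    for X, y in seasons:
        beta = ridge_toward_prior(X, y, lam, prior)
        history.append(beta)
        prior = shrink * beta         # 'perishable' skill: decay before next year
    return history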
And finally, let me ask again, as it still remains unclear to me: why was the range of results in your previous coach-included regression so much smaller?
J.E. wrote: Scott Brooks' estimate does seem high, and maybe an estimate this high is not correct, but there are lots of coaches close to either +4 or -4, so I have little doubt that this is close to the 'real difference' between the best and the worst coach.
There's no point in my repeating the market failure argument, which very strongly suggests that this is rather unlikely. But I have an open mind (and again, nobody trucks in or loves stories about NBA organizational shortcomings more than me). Should you ever get around to throwing coaches into annual regressions along the lines suggested above, I would be very interested in the results, and I'd be willing to bet that a +4 would not appear.
Re: Coach RAPM
schtevie wrote: Though there is no adding-up constraint imposed in the regressions, my expectation is that the weighted sum of player and coach contributions would approximate the observed team performance (similar to the players-only regressions).
This would be true if there were no other adjustments. With the adjustment I quickly mentioned in this post, teams that are consistently good can have their weighted sum of contributions add up to more than observed team performance. It'll become clearer once I explain the adjustment in more detail. Be patient.
schtevie wrote: AND the results were simply not compatible with standard errors returned by multi-season APM.
That means absolutely nothing to me (and shouldn't mean anything to anyone).
schtevie wrote: I am supposing that if you were to do so, and took a similar approach with coaches (entering the regression at zero, with subsequent priors being the previous year's rating, regressed to the mean to reflect, theoretically, the perishability of coaching innovation), you would get a far different range of revealed coaching ability.
I guarantee you the range would be almost the same. We would just see coaches that had more recent, but not so much past, success rated higher.
schtevie wrote: Why was the range of results in your previous coach-included regression so much smaller?
The penalty values are a little lower this time. I think the adjustments I built in allow for lower lambdas, but there's also a certain range of penalty values where OOS error is not going to differ significantly, and I chose to go with the low end this time. I also have 35% more data this time around. More data usually allows for lower penalty values and also leads to higher absolute estimates (even if you didn't change the penalty values). I ran the numbers again with the 'old' penalty values and all the data I have now, and Brooks comes out at +5.5, with the worst coach coming out at -4. Not too big of a difference to what I've posted. To sum up, it's mostly the effect of having more data, and to some small degree because there are more accurate adjustments (age, effect of leading) built in.
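For reference, a generic way to check for a 'range of penalty values where OOS error is not going to differ significantly' is k-fold cross-validation on held-out data; the scikit-learn sketch below illustrates the idea and is not necessarily how J.E. ran his tests (X and y are placeholders for the stint data).
Code:
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def oos_error(X, y, lam, n_splits=5, seed=0):
    """Mean squared error on held-out folds for a given ridge penalty."""
    errs = []
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = Ridge(alpha=lam, fit_intercept=False).fit(X[train], y[train])
        errs.append(np.mean((y[test] - model.predict(X[test])) ** 2))
    return float(np.mean(errs))

# Within a flat stretch of the OOS curve, a lower lambda shrinks less and
# yields larger absolute estimates, which is the effect described above.
lambdas = [500, 1000, 2000, 4000]        # candidate penalties (arbitrary values)
# best = min(lambdas, key=lambda l: oos_error(X, y, l))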
Re: Coach RAPM
J.E. wrote: This would be true if there were no other adjustments. With the adjustment I quickly mentioned in this post, teams that are consistently good can have their weighted sum of contributions add up to more than observed team performance. It'll become clearer once I explain the adjustment in more detail. Be patient.
I look forward to it.
schtevie wrote: AND the results were simply not compatible with standard errors returned by multi-season APM.
J.E. wrote: That means absolutely nothing to me (and shouldn't mean anything to anyone).
To be clear, are you saying that the range of player +/- estimates (and corresponding standard errors) produced by multi-year APM provides no information about the "true" range of player values? As in, it cannot be used to say that there is a certain degree of confidence that the top-rated player has a value greater than X?
J.E. wrote: I guarantee you the range would be almost the same. We would just see coaches that had more recent, but not so much past, success rated higher.
I guess all I can say is "wow". If this penalty concept bears scrutiny, so much conventional wisdom is to be overturned (though I cannot believe, for example, that anyone will ever believe that Phil Jackson only had average talent to employ) and a whole lot of coaches can be expecting multi-million dollar raises (not to mention job offers).
Re: Coach RAPM
It is early in their careers, but 3 of the 5 lowest-rated current coaches are recent ex-assistants or ex-players of Popovich. Has their connection to him been overvalued? Or are their teams still tanking, or at least pre-lift-off?
Byron Scott is the lowest-rated coach with a long tenure. Don Nelson was next worst. Both were last employed by teams considered early adopters / practitioners of advanced analytics.
All of the worst 10 current coaches work for teams considered “analytic” teams. We’ll see where they are in a few years.
Brian Shaw’s high ranking is perhaps the biggest current coach surprise.
If you believe these ratings, about half the league should probably be considering Hollins, Karl, the Van Gundys, Skiles (?), and on down to A. Johnson, McMillan, and Del Negro as replacements for their current coaches with less-than-+1 ratings (or maybe just for the ones below neutral or below -1), whenever they want a better chance to move up.
Brad Stevens has a stronger-than-expected rating. I guess with a neutral coach instead of him, the Celtics' offense would have been 28th instead of 25th.
By a quick / rough check, about 44% of all coaches played in the NBA and about 14% played PG there. Neither factor has much overall correlation with the RAPM estimates... but the 6 best all played in the NBA for 10+ years, 3 of the 4 best were PGs, and the top 20 coaches were almost 50% more likely than the bottom 20 to have played in the NBA and to have played PG there (a quick sketch of that kind of check follows below).
The set of current coaches is modestly worse on average than the full set.
Average coaching performance is better on offense, and coaches are, on average, a modest net positive.
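A quick sketch of the kind of background check described above: the correlation between a 0/1 flag ('played in the NBA', 'played PG') and the coach estimates. The example numbers are made up.
Code:
import numpy as np

def background_correlation(ratings, indicator):
    """Point-biserial correlation between a 0/1 background flag and coach estimates."""
    ratings = np.asarray(ratings, dtype=float)
    indicator = np.asarray(indicator, dtype=float)
    return float(np.corrcoef(indicator, ratings)[0, 1])

# Made-up example: four coaches, two of whom played in the NBA
print(background_correlation([4.2, 1.0, -0.5, -3.1], [1, 1, 0, 0]))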
Re: Coach RAPM
Slicing the data finer probably makes it less close to the truth (it is not precise), but theoretically I'd think one could run a model that had coaches, GMs, and owners all specified instead of just coaches, or a model that took coaching (or whole-front-office) impact down to the change shown in PGs, wings, and bigs instead of all players lumped together. Might be interesting to see, especially for the longer-tenured guys. Their impact may be more in one area than another, and it would be good to try to understand where the impacts are. You may have a net positive impact but still have a negative somewhere to tend to, or be negative overall but have a strength to recognize and maybe build on and around.
Re: Coach RAPM
If you compare your rankings to your previous 10-year coaching rankings, they're much different.
http://web.archive.org/web/201201280628 ... om/coaches
Your old ranking had Thibs at the top with a +2.2. In the new one, Brooks is at the top at +6.2. That's a huge spread. In the old one, Thibs was the only guy at +2, yet in the new one there are 19 of them. Is there a reason for such a wide spread? Other interesting results include Tim Floyd being the worst coach in your new ranking, but in the old one he was like the 20th worst. How could his ranking change that much without coaching a game?
Re: Coach RAPM
At first glance, the values do appear to be too large. This is not "going in with pre-conceived notions and being stubborn." This is more like ... the data is saying Scott Brooks could coach a league average team and make them into a strong contender. That seems wrong. No offense to Scott Brooks -- I don't buy the hate he usually gets, but +6.2 is huge.
Let's continue using Brooks as an example. He took over in 2010. The season before, they won 23 games; with him they won 50 games. That's largely why his RAPM is outstanding. He's only been on one team, and they took off when he started coaching them.
But should he get all the credit? It was a young team still developing and learning how to play. Coincidentally, Wayne Winston (I think) said during 2009 that he wouldn't take Durant on his team because his plus/minus was awful. Even his raw +/- was terrible: -9.1 according to b-ref. Durant heard these comments and brushed them off, but next season he was a +/- monster by adjusted and raw (+7.1) metrics. That's only one player, but you also had rookie Westbrook in 2009, a young Thabo, and this was pre-Harden/Ibaka on a mess of a roster with 20 guys including Robert Swift.
That the entire team grew quickly and played much better in 2010 is not entirely surprising in basketball because development can go in fits and starts, but using one giant model to cover every season sees Brooks taking the credit. The aging model helps, but it's smoothing the data and assuming there's incremental, consistent progress. Of course, players do not always behave this way. Sometimes a player is worse in his second season and breaks out in his third; sometimes he breaks out in his fourth season.
So the problem here is that we do not have enough information to determine Brooks' value as a coach (and his entire coaching staff, really). He's only been with one team, and he's had only one replacement situation (taking over from PJ). In the model here, he's the "sixth man" for every stint, but I don't think the mix of stints and possessions works the same for coaches as it does for players, because coaches generally coach entire seasons in a row. Players at least get substituted and even miss games. Even if a coach misses a game and an assistant replaces him, they're still using his system (presumably).
I know you did some testing, but there's something off here. Saying Hornacek is a +4.2 coach because we've had 2/3rds of a season from him? Can you post the OOS testing? How did you do it?
Re: Coach RAPM
Digging a little into why Brooks' estimate is so high, I did the following:
Remove coaches from the regression, compute age-adjusted 14-year RAPM (which is essentially trying to answer the question 'which player would be the best if everybody was the same age'), and then,
for every player that has played a certain amount of minutes for Brooks and for a different coach, create two distinct players: 'player_with_Brooks' and 'player_without_Brooks'.
So, Durant turns into 'Durant_with_Brooks' and 'Durant_without_Brooks', Perkins turns into 'Perkins_with_Brooks' and 'Perkins_without_Brooks', etc.
We can then look at each player's estimate when playing for Brooks and when playing for somebody else (a rough sketch of this splitting step follows after the table below).
(Fisher has such a high estimate because he's playing extremely well 'for a 39-year-old')
Code:
Player w Br. w/o Br. Difference
Perkins 0.5 2.0 -1.5
Krstic 0.3 0.1 0.2
Sefolosha 2.7 2.3 0.4
Green -3.5 -2.4 -1.1
Collison 6.7 1.5 5.3
Harden 5.6 3.3 2.3
Maynor 2.2 -4.6 6.8
Fisher 7.0 3.6 3.4
Martin 3.5 0.2 3.3
Durant 5.8 -2.3 8.1
The results say that only Perkins and Green played better for a different coach. Kendrick Perkins only played 313 minutes for Boston after tearing his MCL and PCL, then got shipped to OKC. For Sefolosha and Krstic it's not a big difference. All of the remaining players played a whole lot better (given their age) under Brooks, including Harden and Martin. Maynor fell off a cliff after OKC (although in limited minutes). Durant and Collison got the biggest boost.
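A rough sketch of that splitting step (referenced above): every qualifying player gets a 'with coach' and a 'without coach' identity in the design matrix, so the ridge fit returns separate estimates for each. The stint tuple layout and column naming are assumptions, and the minutes cutoff is omitted for brevity; this is not J.E.'s actual pipeline.
Code:
import numpy as np

def split_design_matrix(stints, coach="Brooks"):
    """Build RAPM-style rows where every player is split into two identities,
    e.g. 'Durant_w_Brooks' for stints under `coach` and 'Durant_wo_Brooks'
    otherwise. `stints` is a list of (off_players, off_coach, def_players,
    def_coach, margin_per_100) tuples; this layout is an assumed format."""
    def col(player, stint_coach):
        suffix = "_w_" if stint_coach == coach else "_wo_"
        return player + suffix + coach

    col_index, rows, y = {}, [], []
    for off_players, off_coach, def_players, def_coach, margin in stints:
        row = {}
        for p in off_players:                 # offense players get +1
            row[col(p, off_coach)] = 1.0
        for p in def_players:                 # defense players get -1
            row[col(p, def_coach)] = -1.0
        for name in row:
            col_index.setdefault(name, len(col_index))
        rows.append(row)
        y.append(margin)

    X = np.zeros((len(rows), len(col_index)))
    for i, row in enumerate(rows):
        for name, val in row.items():
            X[i, col_index[name]] = val
    return X, np.array(y, dtype=float), col_index
The X and y returned here would then go through the same ridge fit as any other RAPM run, and the fitted coefficients for the 'with' and 'without' identities are the two columns compared in the table above.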
Re: Coach RAPM
Maynor went off the cliff while in OKC. He had career best numbers at age 22-23. He was injured and played only 9 games in 2011-12. Last year, he never really got going and was shipped out in mid-Feb. At present, he does seem to be washed up at age 26.
It may be that all the players in the table should be checked for 'unique aging curve'. Durant certainly got ahead of the curve. Fisher is an anomaly, aging-wise -- best year in his last 5?
Harden flourished in the 'super sub' role; maybe Collison also, in his own way.
Re: Coach RAPM
Mike G wrote: Maynor went off the cliff while in OKC. He had career best numbers at age 22-23. He was injured and played only 9 games in 2011-12. Last year, he never really got going and was shipped out in mid-Feb. At present, he does seem to be washed up at age 26.
+/- wise, Maynor still brought acceptable performance for OKC in '12-'13, with a +3 ON and a -7 NET rating (having a good NET on such a good team is hard), then went to Portland (-2.5 SRS), where he had a -7 ON and a -4 NET.
He then proceeded to have one of the worst +/- performances of probably any player ever (>300 minutes) in '13-'14 (-27 ON in WAS, -22 in PHI).
Though he's probably not having a large impact on Brooks' coach rating, because of his rather low number of non-Brooks minutes.
Re: Coach RAPM
A few points/questions about the particulars of the Brooks breakdown and then some related, but more general, points.
So if I am getting the explanation straight, Brooks' first-place rating is entirely (?) attributed to the above-expected performances of Maynor and Durant, but let's look at four well-above average Brooks-performing players: Collison, Maynor, Fisher, and Durant. Taking each in turn:
Why is Nick Collison so much the better player in the 14-year RAPM than in the annual xRAPM series (where he maxes out at +2.5 in 2009-10)? As I understand the (relatively small) size of the aging and "effort" adjustments, where does the +6.7 "with Brooks" player come from?
As for Maynor, I'm with Mike G. Being bounced post-OKC to split-season stints with three "transitional" franchises, I'm not sure what weight at all should attend such data (more on this point below).
Next is Fisher. In the entire history of the NBA, there have been only twenty instances of 38-year-old guards having played, and but six for 39-year-olds. Aside from the aforementioned fact that aging effects aren't "that" big, even for old dudes (approximately 0.5 per year), I'm not sure that the aging curve is sufficiently well-specified here.
Finally, there is Durant - the most important source of Brooks' stellar coaching rating. And the simple point is that KD's great leap forward, at least in terms of offense where the available stats are clear, isn't plausibly related to the efforts of a specific coach. Durant's progress related to better shot selection (in terms of zones) and he became a better shooter. Attributing these facts to "coaching" as embodied in the contributions of a single individual/franchise is a real stretch.
These specific concerns about Brooks' ratings aside, let me (re)iterate some general points.
(1) I don't know if one can overcome the problem of coaches getting spurious credit for "above-average" player development using a +/- approach.
Here's a regression result I would like to see. Instead of players with and without Brooks, how about Brooks pre and post 2009-10? And if the results should happen to be that Brooks post 2009-10 is a worse coach after only his first full season of experience, what would that imply? Aren't coaches expected to have "aging curves" too, with some positive returns for experience?
(2) Might it be a good idea to try to separate out the effects of "transitional" seasons? This is to say that it probably isn't fair to hold either the outgoing or incoming coach responsible for the unstable environment (hence results) attending being fired or hired mid-season. I am curious what the effect would be if all such instances were attributed to dummies (a sketch of what such a dummy might look like follows at the end of this post).
(3) There is also the issue of coaches getting spurious credit for the efforts of other coaches. Call this the "Thibodeau Effect". When I look at the listed ratings, I see Jeff Van Gundy and Doc Rivers looking rather Thibodeau-like (but not so much with Don Chaney).
It would be particularly interesting to see coaches ratings for Doc (and Don) with and without Tom (Jeff's tenure perfectly overlapping with Tom).
(4) Finally, there is the flip side of the first point: accounting for spurious credit for above-average player age-related retrogression. Perhaps (not sure, but perhaps) this is the "Gregg Popovich Effect". I wonder how much of his rating is attributable to his aging superstars performing better than "expected". And here there is a conceptual issue involved.
Suppose that a large part of GP's coaching genius owes to his well-known penchant for resting his older players. Should he get credit for that? Surely as an innovator (if indeed he was) but that shouldn't really be incorporated in a coaching rating, for a couple of reasons. (And a similar argument would apply to Jeff Hornacek not misappropriating player credit for aggressively adopting a non-innovation of shooting a lot more 3s.)
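On point (2), one way to "attribute all such instances to dummies" would be to credit a throwaway transition column, rather than either coach, for any stint in a season where that team changed coaches mid-year. The helper below is only an illustration of that bookkeeping, with made-up names.
Code:
def coach_feature(team, coach, season, midseason_changes):
    """Return the column to credit for a stint: the coach himself in a normal
    season, or a one-off transition dummy when `team` changed coaches during
    `season`, so neither the fired nor the hired coach absorbs the upheaval.
    `midseason_changes` is a set of (team, season) pairs."""
    if (team, season) in midseason_changes:
        return f"transition_{team}_{season}"
    return f"coach_{coach}"

# Illustrative usage with made-up entries:
midseason_changes = {("WAS", 2012)}
print(coach_feature("WAS", "Wittman", 2012, midseason_changes))  # transition_WAS_2012
print(coach_feature("OKC", "Brooks", 2012, midseason_changes))   # coach_Brooks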
Re: Coach RAPM
J.E. wrote: Digging a little into why Brooks' estimate is so high, I did the following:
Remove coaches from the regression, compute age adjusted 14-year RAPM (which is essentially trying to answer the question 'which player would be the best if everybody was the same age'), and then
for every player that has played a certain amount of minutes for Brooks and for a different coach: create two distinct players: 'player_with_Brooks' and 'player_without_Brooks'
So, Durant turns into 'Durant_with_Brooks' and 'Durant_without_Brooks', Perkins turns into 'Perkins_with_Brooks', 'Perkins_without_Brooks' etc.
We can then look at each player's estimate when playing for Brooks, and when playing for somebody else
(Fisher has such a high estimate because he's playing extremely well 'for a 39-year-old')
Code:
Player      w Br.   w/o Br.  Difference
Perkins      0.5     2.0      -1.5
Krstic       0.3     0.1       0.2
Sefolosha    2.7     2.3       0.4
Green       -3.5    -2.4      -1.1
Collison     6.7     1.5       5.3
Harden       5.6     3.3       2.3
Maynor       2.2    -4.6       6.8
Fisher       7.0     3.6       3.4
Martin       3.5     0.2       3.3
Durant       5.8    -2.3       8.1
The results say that only Perkins and Green played better for a different coach. Kendrick Perkins only played 313 minutes for Boston after tearing his MCL and PCL, then got shipped to OKC. For Sefolosha and Krstic it's not a big difference. All of the remaining players played a whole lot better (given their age) under Brooks, including Harden and Martin. Maynor fell off a cliff after OKC (although in limited minutes). Durant and Collison got the biggest boost.
Just noting that if this were done for every coach and every player, and the results were then compiled as sums for each position (or just PGs, wings, and bigs), that would be what I suggested above. Not expecting it will be done, though. Really, with existing public data it can be done fairly well for specific coaches people are interested in, with exceptions for players traded mid-season or coaches changed mid-season, if one wants to put in the work to compile it.