I've read a lot of the studies and debates on the value of a defensive rebound but I'm still not convinced anyone has a firm grip on the value.
Has anyone tried to create a controlled experiment where player substitutions and the results were actually compared?
For example, suppose a team had 2 SFs, one grabbing 10 rebounds per 36 minutes and the other 5 rebounds per 36 minutes. If we looked at how well the team rebounded with the exact same lineup except for each of those players, we might have a clue as to how much those 5 extra boards actually added. I'm sure controlling for everything is more complex than that, but if we could find 100 cases like that, it might help a lot.
Value of a Rebound
Re: Value of a Rebound
I don't know that you can find a SF who has averaged 10 reb/36 in recent times. Such a player would be a PF.
http://bkref.com/tiny/XB9fm
But in a general sense, any F that replaces another F, and who rebounds much better, is likely to be accompanied in the lineup by a C or F that rebounds less well.
You can look at pages like this -- http://www.82games.com/1011/1011BOS2.HTM
Find lineups that are the same except for one player, note the team Reb%, look up the player Reb/36.
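The matched-lineup comparison described here can be sketched mechanically. The lineup data below is made up for illustration (real numbers would come from pages like the 82games lineup tables); only the pairing logic is the point.

```python
# Sketch: find pairs of lineups identical except for one player and
# compare team defensive rebound % between them.
# All names and percentages here are hypothetical.

# (frozenset of 5 players) -> team defensive rebound % in those minutes
lineup_dreb = {
    frozenset(["A", "B", "C", "D", "PlayerX"]): 0.745,
    frozenset(["A", "B", "C", "D", "PlayerY"]): 0.722,
}

def matched_pairs(lineups):
    """Return (player_in, player_out, DReb% difference) for every pair of
    lineups that share exactly four players."""
    pairs = []
    keys = list(lineups)
    for i, l1 in enumerate(keys):
        for l2 in keys[i + 1:]:
            if len(l1 & l2) == 4:  # four players shared, one swapped
                pairs.append((l1 - l2, l2 - l1, lineups[l1] - lineups[l2]))
    return pairs

for in_player, out_player, diff in matched_pairs(lineup_dreb):
    print(set(in_player), "vs", set(out_player),
          f"team DReb% difference: {diff:+.3f}")
```

The next step would be to set that team-level difference next to the two players' individual Reb/36 gap and see how often the naive expectation holds.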
Re: Value of a Rebound
Looking at all minutes for 4 seasons for the entire league, Jerry estimated an adjusted defensive rebounding impact rate of more than +1 or less than -1 per 100 possessions in just 20 of 537 cases, or less than 4%. That suggests pretty strongly that, across many lineups, defensive rebounding is largely a team strength, and that individual defensive rebounding differences between 2 players are not simple, reliable additions to actual team performance.
Particular cases may be worth checking, though they are hard to fully control or interpret simply, since all other things are not equal. Even when comparing lineups that are otherwise the same except for one player, the opposition quality and game context may differ.
The pattern in one case is not followed in another.
For Chicago, the biggest-minute lineup with the 4 other starters and Boozer defensive rebounded about 1 %point worse than the same lineup with Gibson (3rd biggest minutes), even though Boozer was estimated to have about a 1 defensive rebound per 100 possessions edge over Gibson. So adjusted rebounding rate did not call this case accurately. But simply using the difference in their individual rebounding rates, one might have expected the lineup with Boozer to do far better (not worse), as his individual rebounding rate was nearly 7 %points higher than Gibson's. So adjusted rebounding rate was not a great guide, but it was better in this case than the raw difference in individual rebounding rates, which some partisans in the defensive rebounding debate assume to be appropriate.
In Denver, the biggest-minute lineup with the 4 other starters and Martin had a 9 %point advantage over the same lineup with Harrington. Adjusted defensive rebounding impact rate alone suggested Harrington's lineup should have done a fraction of a rebound better, though this does not square with conventional subjective evaluation. The difference in their individual rebounding rates was about 3.5 %points, so it was closer but still not really close.
With the Bucks, the most used lineup with Mbah a Moute was out-rebounded by the same lineup with Gooden by 12 %points. The difference in their individual rebounding rates was about 8.5 %points, so it was pretty close and made the right call. Adjusted defensive rebounding impact rate alone suggested Mbah a Moute's lineup should have done a fraction of a rebound better, though this does not square with conventional subjective evaluation or the actual outcome.
With the Pacers, the most used lineup with McRoberts was out-rebounded by the same lineup with Hansbrough by 7 %points. The difference in their individual rebounding rates was about 3 %points, so it made the right call but underestimated the scale of the change. Adjusted defensive rebounding impact rate alone suggested Hansbrough's lineup should have done about one rebound per 100 possessions better. Again it missed by more.
With the Kings, the 3rd most used lineup with Casspi was out-rebounded by the same lineup with Greene by 4.5 %points. The difference in their individual rebounding rates was about 6 %points, with Casspi being the higher, so it was the wrong call, and by a huge amount. Adjusted defensive rebounding impact rate alone suggested Greene's lineup should have done about 1 rebound per 100 better. So this time adjusted defensive rebounding impact rate made the right call, far closer than the other method, but still not really close.
Across the 5 cases: adjusted rebounding rate was better twice and the difference in individual rebounding rates was better three times. The average absolute error was smaller by about 1 rebound %point for the difference in individual rebounding rates.
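The five-case tally above can be reproduced mechanically. The numbers below are read off the case descriptions in this post, but the adjusted predictions quoted only as "a fraction of a rebound" are approximated here as +/-0.5, so the exact error magnitudes are an assumption; the per-case winners are not sensitive to that choice.

```python
# Scoring the two prediction methods against the five cases above.
# Sign convention: positive = first-named player's lineup rebounded better.
# "Fraction of a rebound" adjusted predictions approximated as +/-0.5.

cases = {
    # team:    (actual, adjusted_pred, raw_pred) in %points / per-100
    "Bulls":   (-1.0,  +1.0, +7.0),   # Boozer vs Gibson
    "Nuggets": (+9.0,  -0.5, +3.5),   # Martin vs Harrington
    "Bucks":   (-12.0, +0.5, -8.5),   # Mbah a Moute vs Gooden
    "Pacers":  (-7.0,  -1.0, -3.0),   # McRoberts vs Hansbrough
    "Kings":   (-4.5,  -1.0, +6.0),   # Casspi vs Greene
}

adj_wins = raw_wins = 0
adj_errs, raw_errs = [], []
for team, (actual, adj, raw) in cases.items():
    adj_err, raw_err = abs(actual - adj), abs(actual - raw)
    adj_errs.append(adj_err)
    raw_errs.append(raw_err)
    if adj_err < raw_err:
        adj_wins += 1
    else:
        raw_wins += 1

print(f"adjusted better in {adj_wins} cases, raw better in {raw_wins}")
print(f"mean abs error: adjusted {sum(adj_errs)/5:.1f}, "
      f"raw {sum(raw_errs)/5:.1f}")
```

With these assumed inputs the script reproduces the 2-3 split; the size of the mean-error gap depends on how the "fraction of a rebound" figures are filled in.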
The difference in the average defensive rebound rate could be found for each set of opponents (even adjusted for the strength of those opponents) to see how much effect the difference in opponents had on these numbers. I won't do that here and now (fairly time-consuming), but it could be done and should be done before any strong conclusion. Obviously more than 5 test cases (and done fully) would help too. But this much does suggest that the actual outcomes for particular lineups may not be neatly or reliably picked by either method. Of course one would expect some bounciness in the data when most of the sample sizes are only a few hundred minutes.
Using the average of the two methods would, I think, have done slightly better on an absolute-error basis than the difference in individual rebounding rates alone. Probably better to use a blend than either one alone. Where they agree, you probably can predict fairly well. Where they disagree, the average will probably do better.
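As a sketch, the blend is just the midpoint of the two predictions (the input numbers here are hypothetical):

```python
def blend_prediction(adjusted_pred, raw_pred):
    """Average of the adjusted-impact prediction and the raw
    individual-rate-difference prediction, in %points."""
    return (adjusted_pred + raw_pred) / 2

# e.g. adjusted-impact says +1.0, raw difference says +7.0
print(blend_prediction(1.0, 7.0))  # 4.0
```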
The adjusted rebounding impact set seems like the best guide to overall rebounding impact for a player across the entire set of lineups, but so far it does not seem very good for projecting the change of one player with the same 4 other guys. There is general regression to the team defensive rebounding mean, but not necessarily for a particular lineup.
The difference in individual rebounding rates was slightly better in this mini-test (as far as it was run), but not simply or consistently reliable either. It may be too much to think that this can be predicted reliably for particular lineups, even the very biggest ones.
Using either method or a blend probably would do pretty well at the season-aggregate level. Want more defensive rebounding? Put players with better ratings out there. Which would do better at this level, adjusted rebounding difference or raw rebounding difference? I dunno. Would need more time & incentive to study it fully. Unlikely for me at this moment.
But in the 5 cases, the actual performance of the lineup with the better rebounder of the pair exceeded what was expected from the difference in raw defensive rebounding rates by a bit more than 1 %point on average. Not diminishing returns on average; in fact, on average, increasing returns... in this very small test. The actual team defensive rebounding % exceeded what was expected from the net difference in adjusted defensive rebounding by more than 4 points.
Maybe these aren't typical cases. (Might be different with guards.) They seemed like decent test cases to start with, though. Whether or not they are an unusual draw, I wonder if adjusted rebounding has been regressed too much to the mean. Maybe the adjusted rebounding estimates are proper from a talent or prediction standpoint. It is time-consuming, messy, and demanding of care and caution to push beyond the simple statements and arguments. Probably worthwhile if one does it (especially someone working for a team), then steps back and makes some sensible decisions about how to use the understanding / ambiguity, and adjusts over time based on performance results, more data, more analysis and perhaps more understanding.
Re: Value of a Rebound
Crow, thanks for summarizing Jerry's study. Was that thread lost? Did he report an R^2? I can't remember now. That seems like the most obvious way to get a handle on the predicted vs. actual outcome.
Re: Value of a Rebound
The original thread here is still available.
http://sonicscentral.com/apbrmetrics/viewtopic.php?t=5
But the data on his site is not. (I used a saved copy.)
I don't have the time to think about this further right now.