1) Rankings are dependent on help defense. Player X is guarding Player O; Player O passes to a wide-open teammate, who dunks. Player X is charged with 2 points allowed on that play. Conversely, Player O blows by Player X, a teammate of Player X rotates and blocks the shot, and Player X is credited with a play on which 0 points were allowed.
2) Rankings do not track help defense. Player X rotates to stop the penetration of Player O, who has blown by his man; Player O retreats, and the possession eventually ends in 0 points. Player X is not given due credit.
3) Rankings are not adjusted for individual match-ups. Player X is recognized as the best defender on his team; thus, the coach assigns him to guard the other team's Best Player O night after night. Although Player X repeatedly holds Best Player O to less than what Best Player O usually achieves, that value is still repeatedly higher than what an Average or Garbage Player O produces.
4) Rankings do not measure ball-denial.
5) Rankings do not measure defensive versatility. Player X is capable of doing an above-average job guarding multiple Player O positions.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 8:02 pm
by bchaikin
most of what you say is true, but calling these flaws is somewhat unfair - the data is what it is, and does not purport to be anything else. it's the interpretation of the data that is key...
it is still more information than what we had before, and more information is always better. kind of like when 82games.com first came online. before that there wasn't much, but even with its limitations 82games.com has proven quite valuable. so will the synergy data (heck, it's already valuable)...
as a matter of fact the combination of the two (82games/synergy) is very informative...
Rankings do not measure ball-denial...
if you divide the FGAs against each defender by his minutes played, you get the number of FGAs against each defender on a per-minute (or per-36/40-minute) basis. when you look at this data over several years, it is quite revealing...
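in python, the per-minute idea looks something like this (a quick sketch - the defender names and numbers are made up for illustration):

```python
# sketch of the per-36 FGA-against rate described above.
# all names and numbers here are invented for illustration.

def fga_against_per_36(fga_against, minutes):
    """FGAs faced by a defender, scaled to 36 minutes."""
    return 36.0 * fga_against / minutes

defenders = {
    "Defender A": (420, 1800),  # (FGA against, minutes played)
    "Defender B": (300, 1800),
}
for name, (fga, mins) in defenders.items():
    print(name, round(fga_against_per_36(fga, mins), 1))
# Defender A faces 8.4 FGA per 36 minutes, Defender B only 6.0 -
# a rough first proxy for how much the ball is denied or avoided
```

tracked over several seasons, a consistently low rate is suggestive, though it cannot by itself separate ball-denial from a low-usage matchup...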
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 8:27 pm
by Kathoro
But even if you apply that methodology for ball-denial, there is another thing to consider. Maybe some players are particularly effective at baiting teams to pass to the inefficient scorer they are guarding. They may also be particularly effective at denying the ball to efficient scorers. I think it would be important to distinguish between the two. Ball-denial of an inefficient scorer may even hurt the defense.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 9:14 pm
by bchaikin
excellent points - and again all true...
but - if you were to try to evaluate player defense based solely on data (i.e. and not video), where would you start?...
for example, if two players on offense both shot 55% on 2pt shots, which player is the "better" shooter? obviously most would say they're the same. but if i told you one took almost all of his FGAs right at the rim, and the other almost all of his as deep 2s, then thanks to another data-driven website (http://www.hoopdata.com) we would know that one shot much worse than league average (at the rim the league average is typically about 60%-61% annually) and the other much better (the deep-2 average is about 40% annually). this frames the same data (a 55% 2pt FG%) in a different light...
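the reframing is easy to sketch (the league averages below are the rough figures mentioned; the all-or-nothing shot mixes are idealized):

```python
# sketch: the same 55% on 2s looks very different once compared to the
# league average for each player's shot mix. averages are rough figures
# from the post; the all-or-nothing shot mixes are idealized.

LEAGUE_AVG = {"rim": 0.605, "deep2": 0.40}

def vs_expected(fg_pct, shot_mix):
    """fg_pct minus the league-average FG% for this shot mix.
    shot_mix maps zone -> share of FGAs (shares sum to 1)."""
    expected = sum(share * LEAGUE_AVG[zone] for zone, share in shot_mix.items())
    return fg_pct - expected

print(round(vs_expected(0.55, {"rim": 1.0}), 3))    # -0.055: below average at the rim
print(round(vs_expected(0.55, {"deep2": 1.0}), 3))  # 0.15: well above average on deep 2s
```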
who's to say how in the near future synergy (or some other website/service) will further quantify their data. maybe synergy will someday soon break down their PPP and defensive FGM/FGA even further, maybe by specific player guarded on offense...
but i think what they've done is a heckuva good start for defensive evaluation. you simply have to temper their data with what you actually see and/or know...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 9:39 pm
by Mike G
... for ball-denial, there is another thing to consider...
The bigger consideration, I would think, is that better defenders are generally assigned to the more prolific scorers. So your defensive ace may be giving up more FGAs and points than the defensive slacker who's guarding the non-scoring threat.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 10:33 pm
by Crow
Defensive RAPM in principle does not have any of these fundamental blindspots.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sat May 05, 2012 10:55 pm
by Kathoro
Is there any place where I can view the Defensive RAPM for the entire league on one nice page, ranked in order?
"Defensive RAPM in principle does not have any of these fundamental blindspots."
what does the "Off+Def per 200" column mean/represent? is it a rating or ranking? in particular it shows vince carter with the 3rd highest "Off+Def per 200" among SGs (behind ginobili and wade) or SFs (behind lebron james and luol deng). what is that telling us?...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 12:53 am
by EvanZ
It's a rating. Read it just like you would adjusted +/-. Jerry uses "per 200 possessions", which to him means 100 offensive and 100 defensive possessions. It's equivalent to how most people rate per 100 possessions, though.
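In other words (with made-up numbers), the "per 200" column is just the sum of the usual per-100 offensive and defensive components:

```python
# Sketch: Jerry's "Off+Def per 200" is impact over 100 offensive plus 100
# defensive possessions, so it reads like a conventional per-100 net rating.
# The player values below are made up for illustration.

def off_def_per_200(off_per_100, def_per_100):
    """Combine offensive and defensive RAPM into one overall rating."""
    return off_per_100 + def_per_100

# hypothetical player: +3.1 per 100 on offense, +1.4 per 100 on defense
print(round(off_def_per_200(3.1, 1.4), 1))  # 4.5
```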
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 1:00 am
by Crow
"what does the "Off+Def per 200" column mean/represent?"
It is overall RAPM estimated impact per 100 game possessions.
"is it a rating or ranking?"
It is a rating; but the list is in descending order based on this value, so there is ranking information available too.
"in particular it shows vince carter with the 3rd highest "Off+Def per 200" among SGs (behind ginobili and wade) or SFs (behind lebron james and luol deng). what is that telling us?..."
It is an estimate that Carter has been providing a strong positive impact on both offense and defense this season, in the role assigned. He was about +2 or better in each of the previous 6 years. Very few players are estimated at that level, and fewer still sustain it consistently. It is highly likely that he has provided positive impact, on average. His non-prior-informed RAPM ranks even better. Jerry's other metric, which compares a player to teammates who played alongside the same 4 others, shows the team did better with Carter than with any of the other 8 players who logged a qualifying level of minutes with those same 4. That is a pretty strong case for positive impact in the regular season, on average.
However, his raw team +/- on/off in the 3 playoff games is terrible. Small sample, one particular opponent, in a playoff series apparently not going well. A lot can happen in small samples.
It is not wise to judge a metric by one example or a few, especially when that judgment rests on a very small sample size.
Carter does not have above-average or elite boxscore stats this season, but RAPM analysis of team data with him on the court suggests his presence improves team offense and team defense, and pretty strongly.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 1:30 am
by Crow
This is shifting focus sharply, but it seems necessary and worthwhile:
But these 8 factor ratings don't sum to 2.63, or even come close.
What might I be missing, or are these levels of analysis really that different in terms of his total estimated impact?
Are they both "done right," with Carter's A4PM impact simply estimated as higher than what the factor-level data would suggest on average for players with similar profiles? If so, should one be confident in the A4PM rating, or see this discrepancy as a sign that Carter's A4PM ranking might be too high this year? How large do you estimate the average errors to be for each approach?
The average absolute difference between the sum of estimated factor impacts and A4PM is 1.9. The crosswalk between these two levels of analysis does not seem that strong or "sound." Would it be better to rely more on A4PM and use the factor-level analysis with a good degree of caution? Or should the broader message be to use both with a lot of caution? Would there be value in a composite that averages these two levels, or these two plus one or more versions of RAPM and one or more boxscore metrics? I'd say yes and yes, with care, while recognizing that these are estimates with error. No surprise there, but it still seems worth doing and considering (alongside the parts, video, etc.) rather than not.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 3:12 am
by EvanZ
Crow, you need to multiply each factor by a coefficient. It's described here:
Re: The Flaws of Synergy for Defensive Rankings
by Crow
Alright, thanks for the tip. I saw the coefficients but didn't stop and fully recognize them as the piece I was missing. I should have, but I was fairly low-energy and can be foggy at times.
So Carter was very good on the 2 factors with by far the largest coefficients: team eFG% and opponent eFG% allowed. Simple boxscore stats miss part of the first and most or all of the latter. Even boxscore stats with an adjustment for shot defense are generally not that accurate about player-specific impact on shot defense.
The average absolute difference between A4PM and the sum of the factors for a player, if I did it right this time, is just 0.65. That is pretty close; good enough.
Carter's discrepancy between the two was smaller than average; Andre Miller's and Kemba Walker's were about average.
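The reconciliation is straightforward to sketch in Python. Everything below is hypothetical: the factor names, coefficient values, factor ratings, and A4PM numbers are invented purely to show the arithmetic, not taken from Jerry's actual regression:

```python
# Sketch: multiply each factor rating by its regression coefficient, sum,
# and compare to the overall A4PM. All values below are invented.

def factor_sum(factor_ratings, coefficients):
    """Weighted sum of a player's factor ratings."""
    return sum(coefficients[f] * v for f, v in factor_ratings.items())

def mean_abs_gap(players, coefficients):
    """Average absolute difference between factor sums and A4PM across players."""
    gaps = [abs(factor_sum(p["factors"], coefficients) - p["a4pm"]) for p in players]
    return sum(gaps) / len(gaps)

coefficients = {"team_efg": 2.0, "opp_efg": 2.0, "team_tov": -1.0, "opp_tov": 1.0}
players = [
    {"a4pm": 2.6, "factors": {"team_efg": 0.8, "opp_efg": 0.5, "team_tov": 0.1, "opp_tov": 0.2}},
    {"a4pm": 0.4, "factors": {"team_efg": 0.1, "opp_efg": 0.2, "team_tov": 0.3, "opp_tov": 0.1}},
]
print(round(mean_abs_gap(players, coefficients), 2))  # 0.05
```

With real data, a small mean absolute gap (like the 0.65 figure) would suggest the two levels of analysis are telling a broadly consistent story.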