Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 11:06 am
by EvanZ
Hmm... there shouldn't be any difference. Perhaps a sign on one of the factors is off in your calculation, or I made a mistake somewhere. I'll double-check.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 11:16 am
by EvanZ
Double-checked. The numbers work for me. Crow, you probably have a sign wrong somewhere.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 3:41 pm
by bchaikin
what does the "Off+Def per 200" column mean/represent?
It's a rating. Read it just like you would adjusted +/-.
It is an estimate that Carter has been providing a strong positive impact on both offense and defense.
if this is then in fact an individual player rating, here's my question. just what specifically did vince carter do (not what his team did when he played) such that he is rated as either the 3rd best SG or the 3rd best SF in the league?...
i ask because on offense he (1) shot worse overall than both the league average SG and the league average SF, and (2) he was a worse than average offensive rebounder for an SG and a poor offensive rebounder for a SF...
on defense he was a worse than average defensive rebounder for a SF, and just an average shot blocker for a SF. his rate of turnovers forced (steals plus offensive fouls drawn) was nothing special for either an SG or a SF...
so unless this rating methodology considers carter an outstanding defender outside of steals, blocks, and defensive rebounding, just what did he specifically do on offense or defense to earn such a high rating?...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Sun May 06, 2012 3:42 pm
by Crow
Ok, sorry, Evan, for raising an issue that doesn't exist. The sign is different on forced turnovers and I didn't have that right. (Needed more light to see it.) With that corrected, the numbers match the way they should.
On offense there are many things Carter may have done to help team eFG%: space the court, cut frequently and well, be a shooting and scoring threat that creates additional space for Dirk and Terry, make the right pass and keep it moving in general, run the plays correctly, fill a lane on the break (even if he wasn't the scorer, that can help), etc. For what exactly he did, one would go to the tapes or a team's counting sheet for these actions based on prior review of the tapes (if they keep them).
It may not be common to recognize player impact on teammate eFG% beyond assists, but there are plenty of things players do that affect teammate shooting efficiency indirectly, and RAPM is the only tool that tries to pick them up. The eFG% RAPM impact Factor for a player is actually a measure of his direct impact through his own shooting plus his indirect impact on teammates. One can subtract the former from the total eFG% impact to isolate the indirect impact on teammates (and do the same for the other Factors).
On defense, it has already been clearly stated that the big thing that defensive RAPM picks up that the boxscore doesn't is shot defense impact, though there are several parts to it: 1-on-1 defense, help defense, and transition defense. Kathoro also mentioned other things, such as ball denial and play disruption, draining time (which hurts eFG% on average), and perhaps fostering later turnovers, as well as steering the ball to below-average locations and baiting shots from below-average locations and below-average shooters at those spots. Boxing out also figures in beyond actual rebounding.
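To make the direct/indirect split in the middle paragraph concrete, here is a minimal sketch; both the numbers and the shot-share weighting used for the "direct" piece are hypothetical illustrations, not the actual RAPM calculation:

```python
# A minimal sketch of splitting a total eFG% impact estimate into direct and
# indirect pieces. All numbers are made up, and the simple shot-share weighting
# for the "direct" part is just one rough way to approximate it from boxscore
# data, not the method behind the ratings in this thread.

LEAGUE_EFG = 0.496  # approximate league-average eFG%

def direct_efg_impact(player_efg, shot_share):
    """Rough direct impact of a player's own shooting on lineup eFG%."""
    return shot_share * (player_efg - LEAGUE_EFG)

def split_efg_factor(total_efg_impact, player_efg, shot_share):
    """Return (direct, indirect) pieces of a total eFG% impact estimate."""
    direct = direct_efg_impact(player_efg, shot_share)
    return direct, total_efg_impact - direct

# Hypothetical player: own shooting slightly below average, but a positive
# total eFG% impact factor -- the indirect piece is what's left over.
direct, indirect = split_efg_factor(total_efg_impact=0.010,
                                    player_efg=0.485,
                                    shot_share=0.22)
print(f"direct: {direct:+.4f}, indirect: {indirect:+.4f}")
```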
Re: The Flaws of Synergy for Defensive Rankings
Posted: Tue May 08, 2012 1:18 pm
by Crow
Shift what I was talking about on page 1 to a comparison of A4PM and RAPM. There are discrepancies there and they are worth looking at.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Tue May 08, 2012 10:19 pm
by YaoPau
bchaikin wrote:
if this is then in fact an individual player rating, here's my question. just what specifically did vince carter do (not what his team did when he played) such that he is rated as either the 3rd best SG or the 3rd best SF in the league?...
i ask because on offense he (1) shot worse overall than both the league average SG and the league average SF, and (2) he was a worse than average offensive rebounder for an SG and a poor offensive rebounder for a SF...
on defense he was a worse than average defensive rebounder for a SF, and just an average shot blocker for a SF. his rate of turnovers forced (steals plus offensive fouls drawn) was nothing special for either an SG or a SF...
so unless this rating methodology considers carter an outstanding defender outside of steals, blocks, and defensive rebounding, just what did he specifically do on offense or defense to earn such a high rating?...
IMO you're right to be skeptical of it. Ridge regressions aren't going to drop MSE values dramatically compared to APM. They'll help a bit, and I actually think it's really impressive how good the rankings look given 66 games of data, but you're still dealing with standard errors and bias, so I wouldn't take a single estimate at face value.
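For readers less familiar with the ridge/OLS distinction being referenced, here is a toy sketch of the mechanics (random data, arbitrary penalty; not the actual model or data behind the ratings in this thread):

```python
# Toy illustration of the OLS-APM vs. ridge-RAPM distinction. X is a
# stint-level design matrix (+1 for one team's players on the floor, -1 for
# the other's), y is the margin for each stint. The data here are random
# noise purely to show the mechanics; with real play-by-play, lineup
# collinearity is what makes the ridge penalty matter.
import numpy as np

rng = np.random.default_rng(0)
n_stints, n_players = 2000, 60
X = rng.choice([-1, 0, 1], size=(n_stints, n_players)).astype(float)
y = rng.normal(0.0, 1.0, size=n_stints)

def apm_ols(X, y):
    # plain least squares
    return np.linalg.lstsq(X, y, rcond=None)[0]

def rapm_ridge(X, y, lam=3000.0):
    # ridge regression: (X'X + lam*I)^-1 X'y, shrinking estimates toward 0
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print("OLS spread:  ", apm_ols(X, y).std().round(3))
print("Ridge spread:", rapm_ridge(X, y).std().round(3))  # shrunk toward zero
```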
Re: The Flaws of Synergy for Defensive Rankings
Posted: Tue May 08, 2012 10:53 pm
by bchaikin
IMO you're right to be skeptical of it...
it's not that i'm skeptical. it doesn't even get to that point. it's the concept of "...here's this rating...", it's true...
statements like this don't help:
it has already been clearly stated that the big thing that defensive RAPM picks up that the boxscore doesn't is shot defense impact
because while it may have been clearly stated, it certainly is not clear...
and then when asked for specifics on a particular player rating, you get statements like:
there are many things Carter may have done
For what exactly he did, one would go to the tapes or a team's counting sheet for these actions based on prior review of the tapes
i can give a number of statistically based reasons why vince carter is not the 3rd best SG or SF in the league, but would love to hear the specific reasons why the developer of this rating system thinks he is...
if you have a rating system that rates a player, but the system then has absolutely no idea why that specific player rates so well (or poorly), what good is it?
kind of reminds me of this:
http://waynewinston.com/wordpress/?p=108
it's tough to trust a rating system and its various renditions when you ask specific questions about specific players' ratings - but never get specific answers...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Tue May 08, 2012 11:44 pm
by YaoPau
bchaikin wrote: if you have a rating system that rates a player, but the system then has absolutely no idea why that specific player rates so well (or poorly), what good is it?
APMs are different from other metrics.
Here's an example that might clear things up... imagine I give you 100 coins and I tell you they're all biased in some way, and your job is to determine the probability of each coin landing tails, and you only have a limited time to figure this out. You could go about this a couple of ways: either (1) start flipping coins and watch for patterns. For example, maybe you notice a clear correlation between the size of dents on the heads side and coins landing on tails more often. Then you could create estimates based on measuring dent sizes. Or (2) you can flip each coin, say, 20 times, record the number of times it landed tails, and create a point estimate with a standard error around it based on its distribution. Some of the point estimates will be exactly right, but some will be too low, and some will be too high. That doesn't make it a bad system; you just have errors in your estimates because you were only able to observe 20 flips out of infinitely many possibilities.
Both systems have their own strengths and weaknesses. System 1 is like PER / WinShares, and the player rankings of those systems are pretty good for the most part imo. System 2 is like APM. And just like how seeing only 20 flips led to errors, it turns out that seeing only 66 games of play-by-play data isn't enough to get really precise estimates of player efficiency.
My answer for why Carter rates so well is it's probably just due to sampling error. And that doesn't make APM a bad stat. Let's add to the goofy example and say you had to pick 10 coins with a high tails% for a competition, and millions of dollars were on the line. You could just pick the 10 with the largest head-side dents. Or you could just pick the 10 that had the highest tails% in your 20 flip samples. Or... you could pick the 10 with the highest tails% in your 20 flip samples that also had large head-side dents.
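The coin setup can be simulated directly. A minimal sketch of the selection-on-noise point (arbitrary numbers, not tied to any real ratings):

```python
# Simulating the coin example: 100 biased coins, 20 flips each. Picking the
# "top 10" by observed tails% overstates their true tails%, which is the same
# selection-on-noise issue described above for one-season rankings. All
# numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
true_p = rng.uniform(0.3, 0.7, size=100)    # each coin's true tails probability
observed = rng.binomial(20, true_p) / 20.0  # tails% seen in 20 flips

top10 = np.argsort(observed)[-10:]          # picks based on the noisy samples
print("observed tails% of the picks:", observed[top10].mean().round(3))
print("true tails% of the picks:    ", true_p[top10].mean().round(3))
# The observed average runs well above the true average: some coins made the
# top 10 mostly because their 20-flip sample happened to run hot.
```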
Re: The Flaws of Synergy for Defensive Rankings
Posted: Tue May 08, 2012 11:47 pm
by EvanZ
bchaikin wrote:
it's tough to trust a rating system and its various renditions when you ask specific questions about specific players' ratings - but never get specific answers...
I think it's a true quandary.
If I tell you exactly the "things" you want to know, can you tell me exactly how much each "thing" is worth (i.e. in terms of winning)?
APBR is not SABR. We cannot quantify each "thing" so precisely. So what to do?
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 12:25 am
by Crow
I expected your reply, Bob, carefully cropping and ignoring all the specifics provided about what he may have done on offense and defense. I gave the best answer I could give, the best answer the data allows me to give. It may or may not have been worth the time.
Here are the specifics again, because I feel like highlighting what you cropped out:
On offense there are many things Carter may have done to help team eFG%: space the court, cut frequently and well, be a shooting and scoring threat that creates additional space for Dirk and Terry, make the right pass and keep it moving in general, run the plays correctly, fill a lane on the break (even if he wasn't the scorer, that can help), etc. For what exactly he did, one would go to the tapes or a team's counting sheet for these actions based on prior review of the tapes (if they keep them).
It may not be common to recognize player impact on teammate eFG% beyond assists, but there are plenty of things players do that affect teammate shooting efficiency indirectly, and RAPM is the only tool that tries to pick them up. The eFG% RAPM impact Factor for a player is actually a measure of his direct impact through his own shooting plus his indirect impact on teammates. One can subtract the former from the total eFG% impact to isolate the indirect impact on teammates (and do the same for the other Factors).
On defense, it has already been clearly stated that the big thing that defensive RAPM picks up that the boxscore doesn't is shot defense impact, though there are several parts to it: 1-on-1 defense, help defense, and transition defense. Kathoro also mentioned other things, such as ball denial and play disruption, draining time (which hurts eFG% on average), and perhaps fostering later turnovers, as well as steering the ball to below-average locations and baiting shots from below-average locations and below-average shooters at those spots. Boxing out also figures in beyond actual rebounding.
I've said this many times before, including recently at Evan's blog: factor APM (A4PM, in Evan's terms) gives 8 estimates of specific areas of player impact. And if you pay attention to the last sentence of the middle paragraph above, I note how one can turn those 8 into 16. If 8 or 16 different estimates of where a player is having impact are not enough for you, then the metric and approach is just not for you, probably because you don't want to find or acknowledge the detailed estimates that you say you are interested in, even though they are relatively easy to produce. That reminds me of Berri's posture on APM: simplify, attack parts, ignore or dismiss all efforts to improve the metric and to explain what it can provide at the factor level.
Yes, these are estimates, but everything in the boxscore is an estimate of true talent as well, over whatever sample size. If it is just one season, it is going to have about as much error (compared to true talent or future performance) as APM. But boxscore advocates almost never talk about that. They want RAPM to appear to be the only metric class with acknowledged error and to stigmatize it for that. They also hardly discuss, or have a worthwhile comeback for, the huge gaping holes in the boxscore on offense and defense, except "we don't have easy, exact data to provide those things."
At least RAPM factors, and RAPM factors split into direct and indirect portions, actually attempt to estimate these things and give one something to start with, something to try to interpret alongside the other available input. It can be used, or attacked and dismissed, as you wish. I am going to continue to look at it alongside everything else, precisely because of the estimates it provides that the boxscore does not.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 1:31 am
by bchaikin
If 16 different estimates of where a player is having impact are not enough for you then the metric and approach is just not for you...
oh i get it - if i don't trust the results it's only because i don't understand the process?? sorry but you have repeatedly responded with generalized answers to specific questions...
you are focusing on the methodology - i am focusing on its results. just because i ask specific questions about specific players does not mean i am attacking the methodology. i am simply looking for clarification...
probably because you don't want to find the detailed estimates that you say you are interested in...
are detailed estimates the same as player specifics? is asking for player specifics not the same as trying to find detailed estimates?...
what part of specifics don't you understand? in trying to understand the ratings posted (and APM as a whole) i am asking simple, direct, straightforward questions concerning certain players. the ratings show vince carter as the 3rd rated SG or SF. my direct, simple, straightforward question is why? what did he specifically do to rate so high?...
let's try this again - this same rating system, if i am reading the posting correctly, has matt bonner rated above all PFs except dirk nowitzki. so here is my direct question - what did matt bonner specifically do such that he rates higher than PFs like kevin love, blake griffin, lamarcus aldridge, pau gasol, etc.?...
here is why i ask - he was very efficient on offense (high overall shooting with very low turnovers), but scored at a low rate (13 pts/40min), rarely got fouled, was a very poor offensive rebounder, and got few steals and blocks...
My answer for why Carter rates so well is it's probably just due to sampling error...
is matt bonner's rating also due to a sampling error?...
look, i am trying to understand the results of APM. so is it just this version of APM that has these two players rated so highly? or does APM and its many variations as a whole - because the processes are somewhat similar - also rate them very high?...
If I tell you exactly the "things" you want to know, can you tell me exactly how much each "thing" is worth?
i'll certainly try - what exactly did either vince carter or matt bonner do specifically such that they rate better than many players most would think are better?...
We cannot quantify each "thing" so precisely
then what exactly do the results of APM tell us? if player A has a better APM rating than player B, is it not proper protocol to ask specifically why?...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 3:12 am
by YaoPau
bchaikin wrote: is matt bonner's rating also due to a sampling error?...
Again, my opinion, but I think it's pretty likely.
There aren't many players like Bonner, but low-usage, 3pt-shooting SF/PFs are usually rated a bit above average, and not top 10 in the NBA offensively.
To get a sense of how many players make the top 50ish due to sampling error, take a look at the top 1-year APM guys from the 2010-2011 season. You see a bunch of random names who don't show up near the top of this year's rankings.
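A quick sketch of how one might check that kind of year-to-year (in)stability (the file and column names below are hypothetical placeholders, not an actual published data format):

```python
# One way to run that comparison: correlate 1-year ratings from consecutive
# seasons for players who appear in both. The file names and column names
# below are hypothetical placeholders; substitute whatever format the
# published ratings actually use.
import pandas as pd

r2011 = pd.read_csv("apm_2010_11.csv")  # assumed columns: player, rating
r2012 = pd.read_csv("apm_2011_12.csv")

both = r2011.merge(r2012, on="player", suffixes=("_2011", "_2012"))
r = both["rating_2011"].corr(both["rating_2012"])
print(f"players in both seasons: {len(both)}, year-to-year correlation: {r:.2f}")
# A low correlation supports the sampling-error reading; a high one suggests
# the ratings capture something stable about the players.
```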
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 3:22 am
by EvanZ
It's not necessarily sampling error. I've calculated individual year RAPM ratings for the last several seasons and Bonner has had a consistently high rating every year. In Bonner's case, it's likely the case that Pop uses him in exactly the right way to maximize his 3-pt shooting.
Is that so surprising? Pop's a good coach. 3-pt shooting is valuable and he knows it. Now, Beno Udrih, that might be sampling error. His ratings for the 3 previous seasons were negative.
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 3:32 am
by bchaikin
It's not necessarily sampling error. I've calculated individual year RAPM ratings for the last several seasons and Bonner has had a consistently high rating every year. In Bonner's case, it's likely the case that Pop uses him in exactly the right way to maximize his 3-pt shooting.
so then what are you saying? are you saying matt bonner this season was better than all PFs but dirk nowitzki in terms of helping his team win games (on a per minute basis)?...
Is that so surprising?
if - and i say if - that is what you are saying, then yes it's very surprising...
Re: The Flaws of Synergy for Defensive Rankings
Posted: Wed May 09, 2012 3:34 am
by YaoPau
I'd like to look at those individual year ratings.
You could be right about Bonner, who knows, nothing is for sure, and there aren't many PFs making 42%+ of their 3s and playing good D. But #2 overall would be very surprising to me... being rated consistently positive makes more sense, and that's where I'd expect him to be.