Defensive Adjusted Plus/Minus Ratings (Rosenbaum 2005)

Home for all your discussion of basketball statistical analysis.
Crow
Posts: 10565
Joined: Thu Apr 14, 2011 11:10 pm

Defensive Adjusted Plus/Minus Ratings (Rosenbaum 2005)

Post by Crow »

page 1 of 5

Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Thu Jul 28, 2005 3:15 pm Post subject: Defensive Adjusted Plus/Minus Ratings
On my blog I have posted a few of the results from my latest adjusted plus/minus ratings analysis.

http://danrosenbaum.blogspot.com/2005/0 ... usted.html

Quote:
Defense for the big men: what do the adjusted plus/minus ratings say?
By Dan T. Rosenbaum

If I had a dime for every time I heard that you can't measure defense with stats, I would be a rich man. (Well maybe not rich, but I might have enough money for a nice dinner.)

Steals, blocks, and defensive rebounds - they give us only a snapshot of what a player does on defense. We would like to have more and better data to measure defense. One direction is to collect better defensive statistics, an effort that is being spearheaded by Roland Beech at 82games.com.

Another approach is to use plus/minus statistics to measure how a team defends when a player is in the game versus when he is not. It would seem odd to say that a player was a good defender when his team defended better when he was on the bench than when he was in the game.

Now, of course, it is important to account for who a player is playing with and against. Playing beside Ben Wallace might even make me appear to be a good defender. For that reason I compute adjusted plus/minus ratings that account for who a player is playing with and against. These adjusted plus/minus ratings can then be broken down into their offensive and defensive components.
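The adjustment described above boils down to a regression of stint-level scoring margin on player indicator variables plus a home-court term. Below is a minimal sketch of that idea in Python, with invented two-on-two stints and placeholder player names, not Rosenbaum's actual data, weights, or code:

```python
import numpy as np

# Toy stint data: margin (points per 100 possessions) regressed on player
# indicators (+1 if on the floor for the home team, -1 if on the floor for
# the away team) plus a home-court constant. All numbers are invented.
players = ["A", "B", "C", "D"]
stints = [
    # (home lineup, away lineup, margin)
    (["A", "B"], ["C", "D"], +6.0),
    (["A", "C"], ["B", "D"], +2.0),
    (["B", "D"], ["A", "C"], -1.0),
    (["A", "D"], ["B", "C"], +4.0),
    (["C", "D"], ["A", "B"], -5.0),
]

idx = {p: i for i, p in enumerate(players)}
X = np.zeros((len(stints), len(players) + 1))
y = np.zeros(len(stints))
for row, (home, away, margin) in enumerate(stints):
    X[row, -1] = 1.0  # home-court advantage column
    for p in home:
        X[row, idx[p]] += 1.0
    for p in away:
        X[row, idx[p]] -= 1.0
    y[row] = margin

# Player columns sum to zero in every row, so ratings are only identified
# relative to one another; lstsq returns the minimum-norm solution, which
# here pins the player ratings to sum to zero.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for p in players:
    print(p, round(beta[idx[p]], 3))
print("home court", round(beta[-1], 3))
```

Real implementations weight stints by possessions and use many more observations; the mechanics, a margin regressed on on/off indicators, are the same.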

In "Measuring How NBA Players Help their Teams Win" I describe the gory details of how I compute these adjusted plus/minus ratings. (I have made a few changes since then, along with adding another year of data.) It takes a lot of data for adjusted plus/minus ratings to tell us anything useful, and for that reason it is useful to ask another question: what is the average adjusted plus/minus rating of players similar to a given player? Answering this question can give me a second estimate of a player's defensive productivity and help combat errors from adjusted plus/minus ratings due to lack of data.

So combining ratings of defense from a player's own adjusted plus/minus rating and that of players similar to him, which players are the best defenders? I list the best and worst by position among players playing 1,000 or more minutes in 2004-05. These ratings are predictions for the 2005-06 season, assuming that younger players will improve their defense and older players may see a decline in theirs.

I apologize for not providing more details about this (including the full lists), but I want to let a situation play out before I give away the store. But I can talk more about the methodology and respond to questions and complaints.
HoopStudies



Joined: 30 Dec 2004
Posts: 705
Location: Near Philadelphia, PA

PostPosted: Thu Jul 28, 2005 4:03 pm Post subject: Re: Defensive Adjusted Plus/Minus Ratings
Dan Rosenbaum wrote:
On my blog I have posted a few of the results from my latest adjusted plus/minus ratings analysis.

http://danrosenbaum.blogspot.com/2005/0 ... usted.html

Quote:
....
So combining ratings of defense from a player's own adjusted plus/minus rating and that of players similar to him, which players are the best defenders? I list the best and worst by position among players playing 1,000 or more minutes in 2004-05. These ratings are predictions for the 2005-06 season, assuming that younger players will improve their defense and older players may see a decline in theirs.

I apologize for not providing more details about this (including the full lists), but I want to let a situation play out before I give away the store. But I can talk more about the methodology and respond to questions and complaints.


I like that you did what I highlighted in bold. In my proactive job, I feel like forecasting is more valuable than explaining the past. At the same time, however, there is always the question of "how did he adjust for it?" For now, I'm glad that you did it. It's a mentality that should be emphasized. This is a business, and forecasts are important in business, not debating whether Jordan was better than Magic (though that's fun).

A couple things.

- You should give some rough sense of the magnitude of difference between the best and worst. Is it worth 10 ppg or 1 ppg?
- When you say that Nick Collison is uncertain because of one year of data, you should say the same thing for Matt Bonner.
- You might want to say that the results are based upon as many as 3 years (is it 3?) of data for the players when you state that they are projections.
- What is your general confidence interval on these guys? Could #1 also be #5? Could #1 be #30?
- What is the relative magnitude above average for the best D guys? Is it more or less or about the same as the relative magnitude above average for the best O guys? (related to the first point, but I'm thinking off the cuff)
- It's a little weird that so many Blazer fans tuned in. But as I look at it, it is weird to have the disparity in Blazer #s. Ratliff and Przybilla near the top. Abdur-Rahim and Randolph near the bottom.
_________________
Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necess represent positions, strategies or opinions of employers.
kbche



Joined: 19 Jul 2005
Posts: 51
Location: washington d.c.

PostPosted: Thu Jul 28, 2005 6:46 pm Post subject: Defensive Player Ratings
Hi Dan,

How do you explain the absence of Shaq?
Eli W



Joined: 01 Feb 2005
Posts: 402


PostPosted: Thu Jul 28, 2005 9:49 pm
I don't really understand the theory behind combining adjusted plus/minus with statistical plus/minus. I see SPM as a way of estimating APM from boxscore stats. Why would you want to add an imperfect estimate into your ultimate ranking?

What does it mean if a player has a higher APM than SPM? That he does positive things that the boxscore doesn't capture? If so, then adding in SPM would seem to unfairly penalize this player.

What does it mean if a player has a higher SPM than APM? Do certain players consistently have higher SPM than APM (or vice versa)?

When you have lots of data to work from (several years), will SPM be unnecessary?
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Thu Jul 28, 2005 10:24 pm
John Quincy wrote:
I don't really understand the theory behind combining adjusted plus/minus with statistical plus/minus. I see SPM as a way of estimating APM from boxscore stats. Why would you want to add an imperfect estimate into your ultimate ranking?

What does it mean if a player has a higher APM than SPM? That he does positive things that the boxscore doesn't capture? If so, then adding in SPM would seem to unfairly penalize this player.

What does it mean if a player has a higher SPM than APM? Do certain players consistently have higher SPM than APM (or vice versa)?

When you have lots of data to work from (several years), will SPM be unnecessary?

Thank you very much for this question. I get it a lot, and I don't think I have ever answered it that well. I will give it another try.

In terms of the issues, you have hit the nail on the head. The problem is that plus/minus ratings are noisy and they bounce around a lot from game to game or month to month or even season to season. This is true of regular plus/minus ratings and adjusted plus/minus ratings.

This is a particularly big issue with players that tend to play together a lot. Detroit's starters play a lot of minutes together, so statistically it is hard to break apart the individual ratings. For the Bulls, Ben Gordon and Tyson Chandler played together a lot. During the few times when Gordon was in but Tyson was not, the Bulls usually played very good defense. For that reason Ben Gordon is going to get most of the credit for the good defense during the times when they were playing together, even if it was Chandler who was responsible for the good defense. It may be the case that Gordon truly is an elite defender, but it is also possible that he just got lucky in the small sample of observations without Chandler.

This is where the statistical plus/minus ratings can come in. The issue here is that we just don't have a large enough and varied enough sample of games for Chandler and Gordon. So what can we do to supplement that data? We can look for players like Chandler and Gordon and see what their adjusted plus/minus ratings look like. If we see that the adjusted plus/minus and statistical plus/minus ratings give us roughly the same result, we can be pretty confident in our results. If not, we need to ask more questions. Does Ben Gordon do a lot of good defensive things that are not picked up in the box score? Could he have just gotten lucky?

In a lot of cases the statistical plus/minus ratings might be a more accurate predictor of future defensive performance. If all I wanted to do was characterize past defensive performance, then adjusted plus/minus is what I want. But I want to use these data to tell me which players are going to be good or bad defenders in the future. For that a combination of the adjusted and statistical plus/minus ratings will do a better job.
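One simple way to formalize "a combination of the adjusted and statistical plus/minus ratings" (Rosenbaum does not spell out his exact weighting here) is an inverse-variance weighted average, where the noisier estimate gets less weight. A sketch with invented numbers:

```python
def combine(apm, apm_se, spm, spm_se):
    """Inverse-variance weighted average of two noisy ratings.

    This is a generic shrinkage-style combination, offered as an
    illustration; it is not Rosenbaum's exact formula.
    """
    w_apm = 1.0 / apm_se**2
    w_spm = 1.0 / spm_se**2
    est = (w_apm * apm + w_spm * spm) / (w_apm + w_spm)
    se = (w_apm + w_spm) ** -0.5  # combined estimate is less noisy than either
    return est, se

# A noisy adjusted rating of +4.0 (SE 3.0) combined with a tighter
# statistical rating of +1.0 (SE 1.0) lands much closer to the SPM:
est, se = combine(4.0, 3.0, 1.0, 1.0)
print(est, se)  # est = 1.3
```

The point of the weighting is exactly the Gordon/Chandler problem above: when the adjusted rating rests on a small sample, the similar-players estimate dominates.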

For the players that made 1st and 2nd All-Defensive Teams, here are the average adjusted and statistical plus/minus rating percentiles.

1st Team
Adjusted (82), Statistical (91)

2nd Team
Adjusted (58), Statistical (77)

The fact that the statistical percentiles are on average higher could be because the coaches who vote for these teams are overly concerned with stats like steals and blocks. Or it could mean that the adjusted plus/minus ratings are pretty noisy and the statistical plus/minus ratings do a better job dealing with that noise.

I do not think more years of data will make the statistical plus/minus ratings superfluous. It will help in getting more precise estimates of adjusted plus/minus ratings, but players change over time and some players aren't in the league that long, so there are limits to how much this will help.

One other point that I would like to make is that I do not just use defensive stats in my statistical plus/minus ratings. I use the offensive stats as well. Some of them are not very important, but assists often are pretty important. It appears that especially among big guys, players who can pass tend to be better defenders - perhaps because they tend to do a better job on help defense.

The point here is not that assists have any significant direct effect on defense, but I am using them to identify a particular type of player. And what the statistical adjusted plus/minus ratings are measuring are the average adjusted plus/minus ratings of various types of players. This results in players like Bruce Bowen and Tayshaun Prince, players without gaudy defensive stats, having higher statistical plus/minus ratings than adjusted plus/minus ratings.

Now if I found that a player consistently had a higher statistical plus/minus rating than adjusted plus/minus rating, then I probably would start to attribute that to the player being overrated by the statistical plus/minus rating. But the results are rarely that clean as the adjusted plus/minus ratings tend to bounce around quite a bit.
Kevin Pelton
Site Admin


Joined: 30 Dec 2004
Posts: 979
Location: Seattle

PostPosted: Thu Jul 28, 2005 11:32 pm
I find Jarron Collins' consistent adjusted plus-minus performance to be utterly fascinating. Hollinger called him "extremely limited" last year based on his individual statistics, and I frankly don't find him all that impressive when watching. Yet without the benefit of blocking many shots, he rates as a better defender than many big-time shot blockers.

Here is where I think the data Roland plans to collect will be valuable -- not so much for determining which players are good, but explaining discrepancies in the data. Watching Collins play a lot would probably do the same, but this way is a lot more efficient for me.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Fri Jul 29, 2005 3:14 am Post subject: Re: Defensive Adjusted Plus/Minus Ratings
HoopStudies wrote:
- You should give some rough sense of the magnitude of difference between the best and worst. Is it worth 10 ppg or 1 ppg?
It is about 6.6 points per 40 minutes for the centers and 6.7 points per 40 minutes for the power forwards.

Quote:
- When you say that Nick Collison is uncertain because of one year of data, you should say the same thing for Matt Bonner.
- You might want to say that the results are based upon as many as 3 years (is it 3?) of data for the players when you state that they are projections.

Yes and yes.
Quote:
- What is your general confidence interval on these guys? Could #1 also be #5? Could #1 be #30?

These estimates are typically not precise enough to distinguish #1 and #5, but are precise enough to reasonably distinguish #1 and #30.
Quote:
- What is the relative magnitude above average for the best D guys? Is it more or less or about the same as the relative magnitude above average for the best O guys? (related to the first point, but I'm thinking off the cuff)

The variances for defensive and offensive ratings appear to be about the same.
Quote:
- It's a little weird that so many Blazer fans tuned in. But as I look at it, it is weird to have the disparity in Blazer #s. Ratliff and Przybilla near the top. Abdur-Rahim and Randolph near the bottom.

Miles is also going to end up towards the top and Stoudamire and Van Exel are near the bottom. That is one weird team.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Fri Jul 29, 2005 3:18 am
Shaq is the top center overall and he is a good but not great defensive center. He struggles with the pick and roll and getting up and down the court.

It is Jason Collins of the Nets and not Jarron Collins of the Jazz who is the good defender. Jarron rates as a little bit below average as a defensive center.
Kevin Pelton
Site Admin


Joined: 30 Dec 2004
Posts: 979
Location: Seattle

PostPosted: Fri Jul 29, 2005 9:33 am
Uh-oh, I've been watching too much WNBA if I'm getting my Collinses mixed up.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Fri Jul 29, 2005 12:42 pm
I had some time to look into how "noisy" these adjusted plus/minus ratings are.

Using players who played 1,000 or more minutes in both of the last two seasons, here is the distribution of the absolute value of the difference in these ratings in the last two seasons.

Overall Ratings:

25th Percentile: 0.57 points per 40 minutes
50th Percentile: 1.44 points per 40 minutes
75th Percentile: 2.88 points per 40 minutes
90th Percentile: 4.43 points per 40 minutes

Offensive Ratings:

25th Percentile: 0.53 points per 40 minutes
50th Percentile: 1.23 points per 40 minutes
75th Percentile: 2.09 points per 40 minutes
90th Percentile: 3.09 points per 40 minutes

Defensive Ratings:

25th Percentile: 0.54 points per 40 minutes
50th Percentile: 1.14 points per 40 minutes
75th Percentile: 1.71 points per 40 minutes
90th Percentile: 2.55 points per 40 minutes
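A percentile summary of this kind is straightforward to reproduce. A sketch with invented ratings (the real inputs would be paired adjusted ratings for every player with 1,000+ minutes in both seasons):

```python
import numpy as np

# Hypothetical ratings (points per 40 minutes) for the same six players in
# two consecutive seasons; the values are made up for illustration only.
year1 = np.array([2.1, -0.5, 4.0, 1.2, -3.3, 0.8])
year2 = np.array([1.5, 0.9, 2.2, 1.0, -1.1, 3.8])

# Distribution of the absolute year-to-year change in the rating.
abs_diff = np.abs(year2 - year1)
for pct in (25, 50, 75, 90):
    print(pct, np.percentile(abs_diff, pct))
```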

These ratings are more stable than I have often given them credit for. Remember, not all of the variation above is "noise." Some of it is the natural progression and regression of players' skills as they age and gain experience.

With these results I am reconsidering putting more weight on the adjusted plus/minus ratings in my combined rating.
JasonNapora



Joined: 06 Jan 2005
Posts: 6


PostPosted: Fri Jul 29, 2005 1:53 pm
Dan: Great stuff. Any chance you'll be releasing your complete lists?

Also, I think it makes a lot of sense that guys that pass well are better on defense (especially help defense), since both skills are subsets of the same core skill - court awareness. It's certainly something I'd never thought of before, though, which is one of the great things about this community...
HoopStudies



Joined: 30 Dec 2004
Posts: 705
Location: Near Philadelphia, PA

PostPosted: Fri Jul 29, 2005 2:05 pm
Dan Rosenbaum wrote:
I had some time to look into how "noisy" these adjusted plus/minus ratings are.

Using players who played 1,000 or more minutes in both of the last two seasons, here is the distribution of the absolute value of the difference in these ratings in the last two seasons.


To clarify, these are differences from one season to the next?
_________________
Dean Oliver
Author, Basketball on Paper
The postings are my own & don't necess represent positions, strategies or opinions of employers.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Fri Jul 29, 2005 2:27 pm
HoopStudies wrote:
Dan Rosenbaum wrote:
I had some time to look into how "noisy" these adjusted plus/minus ratings are.

Using players who played 1,000 or more minutes in both of the last two seasons, here is the distribution of the absolute value of the difference in these ratings in the last two seasons.


To clarify, these are differences from one season to the next?
Yes.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Fri Jul 29, 2005 4:08 pm
JasonNapora wrote:
Dan: Great stuff. Any chance you'll be releasing your complete lists?

Also, I think it makes a lot of sense that guys that pass well are better on defense (especially help defense), since both skills are subsets of the same core skill - court awareness. It's certainly something I'd never thought of before, though, which is one of the great things about this community...

I need to let a situation play out before I decide how I am going to put more information out there.

Thinking about stats as an indicator of a skill set is a big departure from the accounting-type approach that thinks about stats as contributions. Tendex, PER, DeanO's offensive efficiency, and Bob's simulation method all work from the base assumption that what we need to do is account for the value of an assist, rebound, steal, made shot, etc., and work from there. That approach is rooted in what our baseball brethren have done for years, and so it is a fundamental part of our methodologies.

But that is why what I am suggesting is a big change. Basketball is not like baseball. Lots of key contributions are not captured in our statistics, so what we want to know when evaluating players is not the average value of an assist, but the average value of an assister. If assisters tend to be horrible defenders (they aren't), it is possible that players with more assists would be less valuable than players with fewer assists (after accounting for the other statistics).

That may seem wrong since assists surely have value, but that is not the point. The point is we want to measure the value of the assist plus everything else that differs between players with different numbers of assists. And that is not a theoretical question; it is an empirical question that can only be estimated using data. Also, these relationships could change over time, so the value of an assister may change over time.
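The distinction drawn here, the value of an assist versus the value of an assister, corresponds to reading a regression coefficient as a player-type premium rather than as a direct contribution. A toy sketch with entirely invented data: if assist-heavy players happen to be better overall, the assist coefficient picks that up, whatever the direct value of an assist is.

```python
import numpy as np

# Invented box-score rates for 200 hypothetical players.
rng = np.random.default_rng(0)
n = 200
assists = rng.uniform(0, 10, n)
rebounds = rng.uniform(0, 12, n)

# Suppose (purely hypothetically) assist-heavy players carry a +0.3 rating
# premium per assist and rebounders +0.1, plus a lot of noise.
rating = 0.3 * assists + 0.1 * rebounds + rng.normal(0, 2, n)

# Regress the rating on the box-score rates. The assist coefficient is the
# average value of an *assister*: the assist plus everything else that
# travels with being that type of player.
X = np.column_stack([np.ones(n), assists, rebounds])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(coef[1])  # recovers a value near the 0.3 premium built into the data
```

As the post says, the coefficient is an empirical quantity: change the population of players (or the era) and the "value of an assister" changes with it.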

I would love to hear what folks have to say about this.
Kevin Pelton
Site Admin


Joined: 30 Dec 2004
Posts: 979
Location: Seattle

PostPosted: Fri Jul 29, 2005 4:37 pm
I think that line of thinking connects well with our discussion of valuing statistics in another recent thread.

To quote myself (I know, how asinine):

Quote:
The unfortunate thing about the Moneyball comparison is that isolating these effects in basketball is orders of magnitude more difficult. Obviously you want to balance, say, guys who get on base versus power hitters, but it's not like having too many power hitters would actually cause them to play worse, whereas that's a possibility in the NBA.


You can't add five guys' PERs and really find out that much about how they'll fare together because that's just not how basketball works. There's way too much interplay, and that's where DanVAL comes in handy. Ultimately, when the sample size becomes larger, I think the analysis can extend to similar five-man lineups as well as similar players.

The other factor is there are some skills (shot creation, for one) that can't really be measured except in the context of team performance.

I'm forgetting the most difficult skill to value by traditional statistics, which is passing. PER, Dean's work ... all just guesses when it comes to assists. Logical ones perhaps, but guesses nonetheless.

I guess that's a long way of saying I agree.

page 3

bchaikin



Joined: 27 Jan 2005
Posts: 690
Location: cleveland, ohio

PostPosted: Mon Aug 01, 2005 1:00 pm
occasionally the rockets of the mid-1980s played a front line of akeem olajuwon (7'0"), ralph sampson (7'4"), and jim peterson (6'10"), and the cavaliers of the early 1990s employed a frontcourt of brad daugherty (7'0"), hotrod williams (6'11"), and larry nance (6'10")...
kjb



Joined: 03 Jan 2005
Posts: 865
Location: Washington, DC

PostPosted: Mon Aug 01, 2005 1:21 pm
Hey, the Wizards sometimes used Kwame Brown (7-0), Brendan Haywood (7-0) and Jared Jeffries (6-11) together.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Tue Aug 02, 2005 4:15 am
tenkev wrote:
I've had a question about Dan's adjusted plus/minus rating for some time now, and I guess I'll ask it now. The Margin formula he uses includes one variable for each player on the floor and one for home court advantage.

I was wondering if there shouldn't be a variable that represents the particular combination of players on the court at any given time. There are different skills that each team needs, and after a certain point there are diminishing returns for that skill. For example, if you have five Ben Wallaces out on the floor, your team is going to be terrible. So the adjusted plus/minus for each will be very low. But if instead you had Ben Wallace on the court with the Phoenix Suns his adjusted plus/minus would be very high.

It seems to me adding a variable for the particular combination would explain away some variation in your results.

Good question. These data can be used to look at player combinations, although it would not be feasible to include a separate variable for every combination of players (or even the most likely combination of players). But if I could ever find the time, I would love to examine how players play with different types of teammates. I think this could shed a lot of light on how to best construct teams and lineups.
gkrndija



Joined: 20 Feb 2005
Posts: 64


PostPosted: Wed Aug 03, 2005 12:48 pm
These adjusted +/- numbers are always an eye-opener. Even though I don't completely understand the calculations, I always look forward to reading more about them because the concept makes a lot of sense. Thanks for sharing the information.....

Some comments/questions about the latest system/ratings:

It's funny to see that Marc Jackson ranked so low on the list considering O'Brien benched Dalembert in favour of him for defensive purposes, especially early in the year.

Are you saying that Aaron McKie will still be an above-average defender next year despite his drastically scaled down minutes from this past season?

When calculating errors, is it less likely for the error to be near the extreme end of the range? For example, in Table 1 of your May 30/04 Winval article, Ray Allen has a Pure Adjusted +/- of 9.0 with a SE of 2.3. Is there a better chance he's +10.0 pts per 100 poss. than +11.3 pts per 100 poss.?

I always found the pure adjusted +/- ratings to be more interesting than the overall or statistical +/- ratings because they seem to focus so much on the intangibles. However, when looking at past overall adjusted +/- ratings, the alpha value usually places a lot more weight on the statistical adjusted +/-. Do the alpha values make the ratings more 50/50 this year, or do the overall ratings still lean a lot towards the statistical +/- ratings? Who are the best pure adjusted +/- players?

What would make Bruce Bowen so good in the statistical +/- ratings? His rebounding, blocks, and steals are all poor for his amount of minutes.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Wed Aug 03, 2005 1:33 pm
gkrndija wrote:
Are you saying that Aaron McKie will still be an above-average defender next year despite his drastically scaled down minutes from this past season?

Yes, that is what I am saying, although he is also a pretty bad offensive player.

Quote:
When calculating errors, is it less likely for the error to be near the extreme end of the range? For example, in Table 1 of your May 30/04 Winval article, Ray Allen has a Pure Adjusted +/- of 9.0 with a SE of 2.3. Is there a better chance he's +10.0 pts per 100 poss. than +11.3 pts per 100 poss.?

Yes, it is more likely that the "true" value is closer to the estimate of 9.0. If I assume that the standard errors are normally distributed (probably a pretty good assumption in this case), then we would have the following expectations.

38% of time less than 0.5 standard errors from the "truth"
30% of time between 0.5 and 1.0 standard errors from the "truth"
27% of time between 1.0 and 2.0 standard errors from the "truth"
5% of time more than 2.0 standard errors from the "truth"
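These percentages follow directly from the standard normal CDF, and can be checked in a few lines (using the identity Phi(x) = (1 + erf(x/sqrt(2)))/2):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def band(lo, hi):
    """P(lo <= |Z| < hi) for a standard normal Z."""
    return 2.0 * (phi(hi) - phi(lo))

print(round(band(0.0, 0.5) * 100))          # 38: within 0.5 SE of the truth
print(round(band(0.5, 1.0) * 100))          # 30: between 0.5 and 1.0 SE
print(round(band(1.0, 2.0) * 100))          # 27: between 1.0 and 2.0 SE
print(round((1.0 - band(0.0, 2.0)) * 100))  # 5: more than 2.0 SE away
```

So for the Ray Allen example (estimate 9.0, SE 2.3), a true value of +10.0 (0.43 SE away) is indeed more probable than +11.3 (1.0 SE away).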

Quote:
I always found the pure adjusted +/- ratings to be more interesting than the overall or statistical +/- ratings because they seem to focus so much on the intangibles. However, when looking at past overall adjusted +/- ratings, the alpha value usually places a lot more weight on the statistical adjusted +/-. Do the alpha values make the ratings more 50/50 this year, or do the overall ratings still lean a lot towards the statistical +/- ratings? Who are the best pure adjusted +/- players?

Yes, I am weighting the pure adjusted plus/minus ratings more heavily this year, especially for defense. I am likely to move more in that direction over time as the pure adjusted plus/minus ratings get more precisely estimated.

Quote:
What would make Bruce Bowen so good in the statistical +/- ratings? His rebounding, blocks, and steals are all poor for his amount of minutes.

He plays a lot of minutes and does not do much on offense. Those kinds of players tend to have very good defensive adjusted plus/minus ratings. Remember this is not saying that playing a player more and having him shoot less makes him a better defender. It is saying that players the coaches stick out on the floor despite them not doing much on offense tend to be good defenders. In fact, this can be interpreted as evidence that coaches are able to identify good defenders who don't do a lot on offense. Because if they were not able to, these players would not, on average, have good defensive adjusted plus/minus ratings.
Ben



Joined: 13 Jan 2005
Posts: 266
Location: Iowa City

PostPosted: Wed Aug 03, 2005 2:51 pm
Dan Rosenbaum wrote:

He plays a lot of minutes and does not do much on offense. Those kinds of players tend to have very good defensive adjusted plus/minus ratings. Remember this is not saying that playing a player more and having him shoot less makes him a better defender. It is saying that players the coaches stick out on the floor despite them not doing much on offense tend to be good defenders. In fact, this can be interpreted as evidence that coaches are able to identify good defenders who don't do a lot on offense. Because if they were not able to, these players would not, on average, have good defensive adjusted plus/minus ratings.


Very interesting. I wonder if this would help high percentage shooting specialists too.
tenkev



Joined: 31 Jul 2005
Posts: 20
Location: Memphis,TN

PostPosted: Wed Aug 03, 2005 3:14 pm
Dan Rosenbaum wrote:
Good question. These data can be used to look at player combinations, although it would not be feasible to include a separate variable for every combination of players (or even the most likely combination of players). But if I could ever find the time, I would love to examine how players play with different types of teammates. I think this could shed a lot of light on how to best construct teams and lineups.


Why would it not be feasible to include a separate variable for every combination? Sample size?
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Wed Aug 03, 2005 4:46 pm
tenkev wrote:
Dan Rosenbaum wrote:
Good question. These data can be used to look at player combinations, although it would not be feasible to include a separate variable for every combination of players (or even the most likely combination of players). But if I could ever find the time, I would love to examine how players play with different types of teammates. I think this could shed a lot of light on how to best construct teams and lineups.


Why would it not be feasible to include a separate variable for every combination? Sample size?

There are dozens of player combinations for each team each year, and probably hundreds per team over a three-year period. That would result in several thousand parameters that would need to be estimated, and most of them would be very imprecisely estimated.

As it is, I save the residuals from the regression, so I could look at different combinations of players and see what the residuals look like on average.
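As a hedged sketch of what averaging residuals by player combination might look like (the names, numbers, and data layout here are invented for illustration, not Dan's actual code):

```python
from collections import defaultdict

# Hypothetical data: (five-man lineup, stint residual) pairs saved from
# the MARGIN regression. Player names and residuals are made up.
stints = [
    (("A", "B", "C", "D", "E"), +3.2),
    (("A", "B", "C", "D", "E"), -1.0),
    (("A", "B", "C", "D", "F"), +0.4),
]

# Group residuals by lineup (frozenset so player order doesn't matter),
# then average within each group.
by_lineup = defaultdict(list)
for lineup, resid in stints:
    by_lineup[frozenset(lineup)].append(resid)

avg_resid = {lu: sum(r) / len(r) for lu, r in by_lineup.items()}
```

A lineup whose average residual is persistently positive would be outperforming what the sum of its individual player ratings predicts, which is exactly the kind of combination effect tenkev is asking about.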
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Wed Aug 03, 2005 11:53 pm Post subject: Reply with quote
In your blog post, you said this

Quote:
And remember I am accounting for who a player is playing with and against and for garbage/clutch time play. So for these adjusted plus/minus ratings it does not matter who a player's substitute is, like it does with unadjusted plus/minus ratings.


Let's put aside the statistical rating component for a minute. Isn't this statement not true? For example, in one year (~2003), John Stockton was the Utah starter, and Mark Jackson was the backup. Together, they played very close to 48 min/game, and they hardly ever played together. The system can determine a rating difference between Stockton and Jackson, which will be reasonably accurate, and it can also determine an absolute rating for Stockton and Jackson, which will be incredibly noisy.

Instead of elaborating further, I would be thrilled if you could talk about this issue. Keep up the great work.
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Wed Aug 03, 2005 11:58 pm Post subject: Reply with quote
Dan Rosenbaum wrote:
tenkev wrote:
I've had a question about Dan's adjusted plus/minus rating for some time now, and I guess I'll ask it now. The Margin formula he uses includes one variable for each player on the floor and one for home court advantage.

I was wondering if there shouldn't be a variable that represents the particular combination of players on the court at any given time. There are different skills that each team needs, and after a certain point there are diminishing returns for each skill. For example, if you have five Ben Wallaces out on the floor, your team is going to be terrible, so the adjusted plus/minus for each will be very low. But if instead you had Ben Wallace on the court with the Phoenix Suns, his adjusted plus/minus would be very high.

It seems to me adding a variable for the particular combination would explain away some variation in your results.

Good question. These data can be used to look at player combinations, although it would not be feasible to include a separate variable for every combination of players (or even the most likely combination of players). But if I could ever find the time, I would love to examine how players play with different types of teammates. I think this could shed a lot of light on how to best construct teams and lineups.


The system is meant to determine "what happened," while your proposed modification, apart from issues of implementation or usefulness, is trying to determine why it happened. If you put Nick Van Exel at center, his +/- will be horrid. That is "what happened." Why it happened is a different issue.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Thu Aug 04, 2005 12:19 am Post subject: Reply with quote
GFunk911 wrote:
In your blog post, you said this

Quote:
And remember I am accounting for who a player is playing with and against and for garbage/clutch time play. So for these adjusted plus/minus ratings it does not matter who a player's substitute is, like it does with unadjusted plus/minus ratings.


Let's put aside the statistical rating component for a minute. Isn't this statement not true? For example, in one year (~2003), John Stockton was the Utah starter, and Mark Jackson was the backup. Together, they played very close to 48 min/game, and they hardly ever played together. The system can determine a rating difference between Stockton and Jackson, which will be reasonably accurate, and it can also determine an absolute rating for Stockton and Jackson, which will be incredibly noisy.

Instead of elaborating further, I would be thrilled if you could talk about this issue. Keep up the great work.

The situation with Stockton and Jackson that you bring up is important. What happens in a circumstance like that described above is that the standard errors get very large, i.e. the estimates are really noisy. And that is exactly what happened with Stockton and Jackson in 2002-03.

But whether Jackson was effective or ineffective will not have a systematic effect on Stockton's rating (like it would with something like the Roland Rating); this is more of a standard error issue.

Technically, substitutes don't really bias the rating like they do with the Roland Rating, but substitution patterns can result in imprecise estimates with large standard errors.
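The Stockton/Jackson pattern can be illustrated numerically. This is a toy simulation, not Dan's actual model or data: when two players almost never share the floor, their on-court indicators are nearly collinear with the intercept, so the individual coefficients get huge standard errors while their difference stays precise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stockton on for 200 stints, Jackson on for 200, plus only 4 rare
# stints where both are on the floor together (made-up numbers).
s = np.r_[np.ones(200), np.zeros(200), np.ones(4)]
j = np.r_[np.zeros(200), np.ones(200), np.ones(4)]
X = np.column_stack([np.ones_like(s), s, j])
y = 2.0 + 4.0 * s + 1.0 * j + rng.normal(0, 8, s.size)

# Ordinary least squares with classical standard errors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

se_s, se_j = np.sqrt(cov[1, 1]), np.sqrt(cov[2, 2])
se_diff = np.sqrt(cov[1, 1] + cov[2, 2] - 2 * cov[1, 2])
# se_s and se_j are large (the levels lean on 4 overlap stints);
# se_diff is small (the difference is identified by all 400 stints).
```

The estimates are unbiased throughout; it is only the precision of the individual levels that collapses, which is Dan's point about substitution patterns versus substitutes.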

Great point. Thanks for bringing it up.
tenkev



Joined: 31 Jul 2005
Posts: 20
Location: Memphis,TN

PostPosted: Thu Aug 04, 2005 9:51 am Post subject: Reply with quote
GFunk911 wrote:


The system is meant to determine "what happened," while your proposed modification, apart from issues of implementation or usefulness, is trying to determine why it happened. If you put Nick Van Exel at center, his +/- will be horrid. That is "what happened." Why it happened is a different issue.


True, but if you can determine "why it happened", then you can more easily make predictions about what will happen and that is a product you can sell.
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Thu Aug 04, 2005 10:31 pm Post subject: Reply with quote
Dan Rosenbaum wrote:
GFunk911 wrote:
In your blog post, you said this

Quote:
And remember I am accounting for who a player is playing with and against and for garbage/clutch time play. So for these adjusted plus/minus ratings it does not matter who a player's substitute is, like it does with unadjusted plus/minus ratings.


Let's put aside the statistical rating component for a minute. Isn't this statement not true? For example, in one year (~2003), John Stockton was the Utah starter, and Mark Jackson was the backup. Together, they played very close to 48 min/game, and they hardly ever played together. The system can determine a rating difference between Stockton and Jackson, which will be reasonably accurate, and it can also determine an absolute rating for Stockton and Jackson, which will be incredibly noisy.

Instead of elaborating further, I would be thrilled if you could talk about this issue. Keep up the great work.

The situation with Stockton and Jackson that you bring up is important. What happens in a circumstance like that described above is that the standard errors get very large, i.e. the estimates are really noisy. And that is exactly what happened with Stockton and Jackson in 2002-03.

But whether Jackson was effective or ineffective will not have a systematic effect on Stockton's rating (like it would with something like the Roland Rating); this is more of a standard error issue.

Technically, substitutes don't really bias the rating like they do with the Roland Rating, but substitution patterns can result in imprecise estimates with large standard errors.

Great point. Thanks for bringing it up.


Thanks again for responding.

I would agree that, using the statistical definition, this situation will not result in biased rankings. In the fantasy case, where every player combination gets infinite minutes, the starting PG will have a 100% accurate rating, regardless of backup. In the Stockton case, your system will produce unbiased rankings for Stockton and Jackson with a very large standard deviation. Do you use some kind of dampening factor to try to smooth out the noise in this situation? Dampening has both advantages and disadvantages, but I would prefer it, because then results that deviate far from average carry a greater degree of confidence.

Regarding what you wrote, your distinction between the substitute and the substitution pattern is a great way of putting it. When you made your original statement about substitutes not mattering, the Stockton-Jackson case immediately popped into my head, as it was the bane of my existence.

I've got another question, if you would be kind enough to take the time to answer. In your post, you mentioned how All-Defense players have, on average, higher statistical +/- ratings.

My instinct would be that the All-Defense team is a mix of players with lots of blocks and steals and Bowen-style players, with the "stats players" being the higher proportion. This, combined with the very loose correlation between All-Defense selections and actual defensive ability, would result in higher statistical defensive +/-.

However, you stated that, for example, Prince had a higher statistical ranking, which goes against this. This is due just as much to his lower-than-expected Adjusted +/-, which makes it tough to determine whether he is just an overrated defender with decent defensive stats, or whether the statistical ranking is better at showing his ability.

Anyway, I've been rambling off topic. My question is, have you done any kind of year-to-year test, to see which rating (the Adjusted or the Statistical) is a better predictor of future ratings (both Adjusted and Statistical)? This would be a very interesting data point in the discussion over the merits of the statistical ranking. If you haven't, would you agree that this might be a useful test?

Keep up the great work.

EDIT: With the Stockton-Jackson situation, if you use "damping," and, for example, add an entry into your matrix for every 2-player combination that shows 30 minutes of play in which the players were equal, you will reduce the "noise" by generating estimates that "default to average" in the absence of sufficient data. In this case, the quality of the substitute will absolutely affect the ranking of the starter.
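The augmentation described in that EDIT is ridge regression in disguise. A minimal sketch, assuming the "30 minutes of equal play" idea maps to a pseudo-observation weight of 30 (that weight and the whole setup are hypothetical, not anything from Dan's system):

```python
import numpy as np

def damped_fit(X, y, prior_weight=30.0):
    """Least squares with damping by data augmentation: append one fake
    observation per coefficient saying "margin was 0," weighted as if it
    were ~prior_weight minutes of play. Algebraically this is ridge
    regression with penalty lambda = prior_weight."""
    k = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(prior_weight) * np.eye(k)])
    y_aug = np.concatenate([y, np.zeros(k)])
    beta, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return beta
```

Because the pseudo-rows pull every coefficient toward zero, a starter's estimate now does depend on how his backup rates, which is exactly the trade-off noted above: less noise in exchange for a little bias toward average.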

Last edited by GFunk911 on Thu Aug 04, 2005 11:28 pm; edited 1 time in total
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Thu Aug 04, 2005 10:41 pm Post subject: Reply with quote
tenkev wrote:
GFunk911 wrote:


The system is meant to determine "what happened," while your proposed modification, apart from issues of implementation or usefulness, is trying to determine why it happened. If you put Nick Van Exel at center, his +/- will be horrid. That is "what happened." Why it happened is a different issue.


True, but if you can determine "why it happened", then you can more easily make predictions about what will happen and that is a product you can sell.


Oh, absolutely. I didn't mean to imply that determining "why it happened" wasn't a valuable goal; it would be immensely valuable. By grouping players into player types and assessing similarities between players, you can create data groups with a greater number of minutes and really start to analyze which player TYPE combinations are good and which ones are bad. I was just trying to say that the goal behind your proposed modification would be more applicable as a separate system, or as a next step, not as an addition to the existing system.
Ben F.



Joined: 07 Mar 2005
Posts: 391


PostPosted: Fri Aug 05, 2005 9:58 am Post subject: Reply with quote
I read Dan's latest piece on 82games and just had a quick comment regarding the Ben Gordon/Tyson Chandler situation.

Gordon has a high adjusted +/- but a low statistical +/-, and Chandler is the opposite. Players with stats similar to Chandler's are very good defenders, and players with stats similar to Gordon's are not. It would seem to me that since this is such a noisy situation (as you said in your article), and the subjective viewpoints side with Chandler, a reasonable conclusion to draw is that it is Chandler's defense that is being picked up, not Gordon's.
Last edited by Crow on Thu May 12, 2011 6:10 pm, edited 3 times in total.

Re: Defensive Adjusted Plus/Minus Ratings

Post by Crow »

page 4

Author Message
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Fri Aug 05, 2005 10:10 am Post subject: Reply with quote
FFSBasketball wrote:
I read Dan's latest piece on 82games and just had a quick comment regarding the Ben Gordon/Tyson Chandler situation.

Gordon has a high adjusted +/- but a low statistical +/-, and Chandler is the opposite. Players with stats similar to Chandler's are very good defenders, and players with stats similar to Gordon's are not. It would seem to me that since this is such a noisy situation (as you said in your article), and the subjective viewpoints side with Chandler, a reasonable conclusion to draw is that it is Chandler's defense that is being picked up, not Gordon's.


That was my first thought, and Dan mentioned it, as you said. The only problem is that Chandler's adjusted ratings seem fairly consistent for the last 3 years; it's not just a one-year blip. Of course, Chandler hardly played last year and was only 19 or 20 two years ago, so it's certainly plausible that he has matured since then and become a dominant defender (which is being masked by the noise this year). From all reports, he is a dominant or near-dominant defender. Just some food for thought.
Kevin Pelton
Site Admin


Joined: 30 Dec 2004
Posts: 978
Location: Seattle

PostPosted: Fri Aug 05, 2005 10:11 am Post subject: Reply with quote
Chandler's defense might not be picked up because of Gordon this year, but Chandler's defense wasn't picked up during the previous two seasons either. What accounts for that?
GFunk911



Joined: 03 Aug 2005
Posts: 7


PostPosted: Fri Aug 05, 2005 12:01 pm Post subject: Reply with quote
admin wrote:
Chandler's defense might not be picked up because of Gordon this year, but Chandler's defense wasn't picked up during the previous two seasons either. What accounts for that?


I can't tell if you're responding to me or the previous poster, but I'll respond anyway. Given his consistency, the most likely possibility seems to be that his 2004-2005 adjusted rating is relatively accurate. I put out a theory that would reconcile his ratings with being a dominant defender in 04-05. It's not the most likely explanation, but it's certainly plausible.

It's probably a combination of the two. Chandler was hurt last year, and presumably he is maturing relatively rapidly (compared to other rookies and young players). Most likely, he has experienced some real improvement over the last two years, but significantly less than suggested by his statistical rating or reputation. Some of that improvement is being masked by the "Gordon effect."
bchaikin



Joined: 27 Jan 2005
Posts: 684
Location: cleveland, ohio

PostPosted: Fri Aug 05, 2005 9:46 pm Post subject: Reply with quote
i have a question about the august 2005 82games article about individual player defense:

http://www.82games.com/rosenbaum3.htm

are the entire lists for each position displayed anywhere? or complete lists of these ratings by team somewhere? is it on the 82games website or on your homepage?...

also is there any way to combine individual player ratings to get a team-type rating, so as to compare the totals of each team to something like each team's defensive points per possession? in other words, would adding up each individual player's rating on a team, normalized to each player's minutes played, equal the team's actual per game point differential for the season?...
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Sat Aug 06, 2005 12:02 am Post subject: Reply with quote
bchaikin wrote:
i have a question about the august 2005 82games article about individual player defense:

http://www.82games.com/rosenbaum3.htm

are the entire lists for each position displayed anywhere? or complete lists of these ratings by team somewhere? is it on the 82games website or on your homepage?...

also is there any way to combine individual player ratings to get a team-type rating, so as to compare the totals of each team to something like each team's defensive points per possession? in other words, would adding up each individual player's rating on a team, normalized to each player's minutes played, equal the team's actual per game point differential for the season?...

As I mentioned at the top of this thread . . .

Quote:
I apologize for not providing more details about this (including the full lists), but I want to let a situation play out before I give away the store.

I realize that this makes it harder to evaluate this methodology. The ratings do add up in the way that you mention above, but I understand if you are skeptical without being able to check it yourself.
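As a toy illustration of the adding-up property Bob asks about (every rating, name, and minute total below is invented, and it assumes a lineup's expected margin per 48 minutes is simply the sum of its five players' ratings):

```python
# Hypothetical per-48-minute ratings for six players.
ratings = {"A": 3.0, "B": 1.0, "C": 0.0, "D": -1.0, "E": 2.0, "F": -2.0}

# Hypothetical rotation: (five-man lineup, minutes together per game).
lineups = [
    (("A", "B", "C", "D", "E"), 30.0),
    (("A", "B", "C", "D", "F"), 18.0),
]

# Minutes-weighted sum over lineups reproduces the team's expected
# per-game point differential under the additivity assumption.
margin = sum(sum(ratings[p] for p in five) * mins / 48.0
             for five, mins in lineups)
# Here: 5 * 30/48 + 1 * 18/48 = 3.5 points per game.
```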
bchaikin



Joined: 27 Jan 2005
Posts: 684
Location: cleveland, ohio

PostPosted: Sat Aug 06, 2005 12:23 pm Post subject: Reply with quote
I realize that this makes it harder to evaluate this methodology. The ratings do add up in the way that you mention above, but I understand if you are skeptical without being able to check it yourself....

now, now, you are assuming again.... on the contrary i am not the least bit skeptical at all, just the opposite. i (and i think most here) welcome any attempts to rate players defensively, and from what i've seen from your top 10/bottom 10 listings i concur with the vast majority of them, based on how i rate players defensively (also using roland's data)....

i only mention about adding up the ratings of the players on the same team because this is exactly how i have to do it for individual player defensive FG%s (after accounting for blocked shots) for the simulation. if the numbers don't add up to how the team performed defensively, then adjustments have to be made or the sim doesn't work properly...

take for example eddy curry. you have him rated defensively as poor (or rather in the bottom 10 of Cs). yet 82games has the opposing team Cs shooting an eFG% of just .446 when he was in the game, and the league average eFG% for Cs last year was .496, meaning curry, if we assume he was always playing C and always guarding the opposing C, held his counterparts to -5.0% shooting from the league average. of course this can easily be a poor assumption (and where individual game charting will assuredly help) because of switching man-to-man defenses and zone defenses...

now CHI had the league's best/lowest def FG% as a team, and also had the league's best def FG% once you subtract out blocked shots (which the simulation accounts for separately). so the above excellent defense by curry does not surprise me. but i can easily see how your rating system would rate him low despite this because he gets few steals, is a poor defensive rebounder, and blocks fewer shots than the average C did in 04-05. these are all core components of a player's overall defense and an individual's def FG% is just one part of it...

CHI had a def eFG% of .456 when he was in the game, and a def eFG% of .452 when he wasn't, little difference, but an off eFG% of .487 when he was in the game and just .459 when he wasn't. so just from this info it would appear curry is a big positive factor. yet the team +/- was -6.0 when he was on the floor, so obviously other factors came into play and this is where your rating system can be key...

for example when curry was in the game 82games shows CHI committing 3 more turnovers per 48 min on offense - which we can partly attribute to curry because his TOs/touch is very high - yet also forcing 3 fewer turnovers per 48 min, for which we do not yet have individual stats (turnovers forced by individual players). your system does indeed incorporate this (rather the end result of this), and that is most important for properly rating players defensively.....
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Sat Aug 06, 2005 2:06 pm Post subject: Reply with quote
Bob, nice analysis of Eddy Curry. He is a player of such contrasts that it is easy to come to misleading conclusions about him if one looks at just one or two statistics. Your analysis does a good job of demonstrating that exact point.
NickS



Joined: 30 Dec 2004
Posts: 384


PostPosted: Sat Aug 06, 2005 7:53 pm Post subject: Reply with quote
I agree. Bob's discussion of Eddy Curry is an excellent summary of the statistics that we have to measure defense and the difficulty of finding anything conclusive with those statistics.

I also agree with Bob's comment that a proper rating of defense should be set up so that the defensive ratings of the players add up to the defensive rating of the team.

I was re-reading Basketball on Paper today, and Dean Oliver makes a similar point about assigning credit for team success: it forces you to keep your accounting honest if you insist that the total credit you assign to individuals adds up to the performance of the team.
bballfan72031



Joined: 13 Feb 2005
Posts: 54


PostPosted: Thu Aug 11, 2005 5:18 pm Post subject: Reply with quote
First off, I loved the defensive ratings, those were really interesting.

I apologize if someone's said this, but I just don't remember reading it earlier....


When Gordon and Chandler were on the court together, their opponents had a 31.375% close pct.

When Gordon alone was on the court, their opponents had a 37% close pct.

When Chandler alone was on the court, their opponents had a 28.4% close pct.

Could part of the reason the ratings are so strange be that teams were afraid to drive to the basket when Chandler was in the game, and as a result shot a lot of threes? And it may just have happened that opponents were randomly making 3s at a high rate when Chandler was in the game.

I'm not sure if this is true, but maybe this, along with some other factors, could have helped to make Chandler's ratings lower than expected and Gordon's higher.

This is, of course, assuming I'm understanding close pct. correctly.

I dunno, just a thought. I'd like to see the 3-point numbers when Gordon and/or Chandler are on the court.

(Hope you all don't mind if I bump this up)
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Sat Aug 13, 2005 2:48 pm Post subject: Reply with quote
sportsfan, this is an interesting analysis. And I am sure there is some randomness, like what you point out, that might be the reason Gordon rates so well. But I bet he also is a little better defender than he has gotten credit for. Maybe he is good at recognizing which shots to give a player and which ones not to. Maybe he is good at closing out on three-point shooters. That kind of thing could be hard to pick up with the naked eye - perhaps even for a coach.

So often we focus on whether a guy can beat someone off the dribble as THE measure of good perimeter defense. That is only a small part of good perimeter defense. There are probably some guys who finish so poorly that you are better off letting them beat you.
tmansback



Joined: 12 Aug 2005
Posts: 129


PostPosted: Sat Aug 13, 2005 3:29 pm Post subject: Reply with quote
Dan Rosenbaum wrote:
There are probably some guys who finish so poorly that you are better off letting them beat you.


A perfect example of that back in the day was Nick Van Exel vs. Gary Payton. GP was so good at stopping penetration that he would go for every NVE fake. NVE, though, was so horrible at finishing around the hoop that letting him drive would have been the proper move. Instead, Gary would shut down the drive just as Nick wanted, and Nick would pull back and shoot the 3. Nick was probably the guy who had the most success against Gary, even when Gary was in DPOY form.
junkball



Joined: 24 Jun 2005
Posts: 4


PostPosted: Sun Aug 14, 2005 8:42 pm Post subject: Reply with quote
Hello

I was reading over the adjusted +/- article and thought it was a good idea, although I'm wondering about this equation:

MARGIN = b0 + b1X1 + b2X2 + . . . + bKXK + e

Was this fit separately for every different team? (30 regressions) And wouldn't standardized regression coefficients be the best way to go?

The simple correlation of a player with MARGIN would also be important to examine. If a player doesn't have a high simple correlation, then I definitely wouldn't be trading for him.

I read the defensive ratings, and was wondering a bit about the methodology...sorry if this has been asked many times.

Was the ranking value created by replacing "MARGIN" with "opposing team's points against"? Edit: looking over the articles, this doesn't appear to be the case. However, would this be a good way of assessing purely defensive capability?

Thanks, in advance, for any answers. I enjoy reading these articles.
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Sun Aug 14, 2005 9:26 pm Post subject: Reply with quote
junkball wrote:
Hello

Reading over the adjusted +/- article and thought it was a good idea. Although, I'm wondering for this equation,

MARGIN = b0 + b1X1 + b2X2 + . . . + bKXK + e

This was fit separately for every different team? (30 regressions)
Wouldn't standardized regression coefficients be the best way to go? and

The simple correlation of a player with MARGIN would also be important to examine. If a player doesn't have a high simple correlation, then I definitely wouldn't be trading for them.

I read the defensive ratings, and was wondering a bit about the methodology...sorry if this has been asked many times.

Was the ranking value created by replacing "MARGIN" with "opposing team's points against"? Edit: looking over the articles, this doesn't appear to be the case. However, would this be a good way of assessing purely defensive capability?

Thanks, in advance, for any answers. I enjoy reading these articles.

Separate regressions for all 30 teams would not make sense here. All of the data is pooled together.

The "simple correlation" would probably be something like the Roland Rating. There are several reasons why a player may have a low Roland Rating, but a low rating would not tell us much about his effectiveness. So I am not sure of the reasoning behind your statement.

Accounting for the other players on the floor, I can measure how a given player affects the point differential between the two teams and the total points scored by the two teams. From that I can derive offensive and defensive ratings.

I don't think any rating measures "capability," but I think it does measure how effective a player has been in a particular role. And since players generally do not dramatically change roles from year to year, that is very useful to measure.
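For illustration, the decomposition Dan describes could work out as simple algebra. This is a guess at the mechanics rather than his actual formulas, and the numbers are invented:

```python
# Invented effects for one player, in the same units (say, per game):
b_margin = 4.0   # effect on MARGIN (own points minus opponent points)
b_total = 2.0    # effect on TOTAL  (own points plus opponent points)

offense = (b_total + b_margin) / 2.0   # effect on own team's scoring: +3.0
defense = (b_total - b_margin) / 2.0   # effect on opponents' scoring: -1.0
# A negative "defense" number is good here: opponents score less.
# The two components recombine exactly: offense - defense == b_margin.
```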
mtamada



Joined: 28 Jan 2005
Posts: 376


PostPosted: Mon Aug 15, 2005 9:27 am Post subject: Reply with quote
NickS wrote:

I also think Bob's comment that a proper rating of defense should be set up so that the defensive ratings of the players add up to the defensive rating of the team.


This would probably not be a good idea. It would be great if basketball, or for that matter life, were so simple that we could simply add up parts' values to get the value of the whole. But basketball is almost certainly a nonlinear system, and you can't simply add up the values of such a system to evaluate the whole.

The simplest and clearest example of this is scoring: if you take 5 players such as Iverson, Kobe, LeBron, Nowitzki, and Stoudemire, who each averaged 26-30 points per game last year, would the resulting team average 140 points per game? Certainly not. We can't simply add up players' scoring averages and get a good prediction of the team scoring average.

Now it's true enough that we could look at last year's Spurs' scoring averages, add them up, and the result would exactly match the Spurs' actual team scoring. But that's 20-20 hindsight, and a poor procedure for trying to predict what players and teams will do in 2006 (unless the personnel on the team stays exactly the same).

Similarly, it's unlikely that, even if we had good defensive ratings of individual players, the resulting team defense could be calculated by simplistically adding the individual players' ratings.

What would the correct formula be? It might be so complicated that there is no formula, and some sort of simulation might be needed. At a minimum, it would probably have to account for various complications such as: most likely, centers' defensive capabilities are more important than other players' defensive capabilities; a probable synergy where having 5 good defenders on the court is more than 5 times better than having just one good defender (because the opposition can simply work away from the good defender and pass the ball to the other 4 players; it also reduces the need for double-teaming); etc.

It is true enough that a self-consistent model needs a mechanism wherein the individual players' attributes somehow lead to the actual team attributes. But that mechanism is unlikely to be something so simplistic as a simple sum.

This is, as I've said before, IMO the holy grail of hoopstats. How do players' individual characteristics (as measured either by direct statistics or theoretically derived measures) combine into team characteristics? Be it scoring, rebounding, defense, or whatever, the answer, if we ever get there, will likely be a complex one. Or possibly one that has to be answered via simulation rather than formulas. Actually, I have an idea for measuring rebounding which is fairly simple (but more complex than simple addition), but I haven't had time to see if it actually works well.
Ben F.



Joined: 07 Mar 2005
Posts: 391


PostPosted: Mon Aug 15, 2005 9:51 am Post subject: Reply with quote
mtamada wrote:
This is, as I've said before, IMO the holy grail of hoopstats. How do players' individual characteristics (as measured either by direct statistics or theoretical derived measures) combine into team characteristics?
A big part of seeing how players would interact with each other (at least on the offensive end), I believe, is who uses more possessions, who uses fewer possessions, and how efficiency is affected by this. Thus my questions in this thread. Perhaps you'd help out on the paper Dan proposed?

page 5

Author Message
junkball



Joined: 24 Jun 2005
Posts: 4


PostPosted: Mon Aug 15, 2005 1:08 pm Post subject: Reply with quote
Dan Rosenbaum wrote:
Separate regressions for all 30 teams would not make sense here. All of the data is pooled together.


Ah, I see. That's one heck of a data set.

Quote:
The "simple correlation" would probably be something like the Roland Rating. There are several reasons why a player may have a low Roland Rating, but it would not tell us much about their effectiveness. So I am not sure of the reasoning behind your statement.


The rule of thumb for multiple regression is to look at the beta values and at the simple correlations... I was presuming this also applies to basketball.

- I'm not certain why one would use b values instead of beta values for WinVal. Beta values account for differences in scale between IVs. It might not make much of a difference because all the variables are 1's and 0's, but still...

zMargin = B1*zx1 + B2*zx2 + ... + Bk*zxk

is my understanding of how the WinVal rating should be done.

- The b's (WinVal) tell us the unique contribution of a player to the team. The "shared" contribution to team success doesn't say anything conclusive about any one particular player. On the other hand, this shared contribution (reflected in the simple correlation) does provide an upper bound on how much a player contributes to team success.


Quote:
Accounting for the other players on the floor, I can measure how a given player affects the point differential between the two teams and the total points scored by the two teams. From that I can derive offensive and defensive ratings.


OK. So there are two regressions: one with MARGIN and one with total points scored, with the offensive and defensive ratings being some function of a player's two b values.
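If that reading is right, backing the two components out of the two coefficients is simple algebra. A minimal sketch, assuming (my assumption, not something stated in the thread) that b_margin is a player's estimated effect on own-minus-opponent points and b_total his effect on own-plus-opponent points:

```python
def split_ratings(b_margin, b_total):
    """Recover offensive and defensive components from a player's effects on
    point margin (own - opp) and total points (own + opp).

    Hypothetical algebra, not necessarily Rosenbaum's exact construction.
    Sign convention: a positive defensive rating means fewer opponent points.
    """
    own = (b_total + b_margin) / 2.0  # effect on own team's scoring
    opp = (b_total - b_margin) / 2.0  # effect on opponents' scoring
    return own, -opp

# A player who raises margin by +4 and total points by +2 per 100 possessions:
off_rating, def_rating = split_ratings(4.0, 2.0)
# off_rating = 3.0 (own scoring up 3), def_rating = 1.0 (opponents down 1)
```

The two ratings sum back to the margin effect (3.0 + 1.0 = 4.0), which is the consistency check you would want.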
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Mon Aug 15, 2005 1:51 pm Post subject: Reply with quote
junkball wrote:
The rule of thumb for multiple regression is to look at the beta values and at the simple correlations... I was presuming this also applies to basketball.

- I'm not certain why one would use b values instead of beta values for WinVal. Beta values account for differences in scale between IVs. It might not make much of a difference because all the variables are 1's and 0's, but still...

zMargin = B1*zx1 + B2*zx2 + ... + Bk*zxk

is my understanding of how the WinVal rating should be done.

- The b's (WinVal) tell us the unique contribution of a player to the team. The "shared" contribution to team success doesn't say anything conclusive about any one particular player. On the other hand, this shared contribution (reflected in the simple correlation) does provide an upper bound on how much a player contributes to team success.

Standardizing the independent variables is an issue of how you want to present the results. In this case, I do not want the independent variables in standard deviation units; I want them to tell me the effect of a player when he is in the game. Unstandardized, that is exactly what they tell me. Standardized, the coefficients would be hard to interpret, because a one standard deviation change has less meaning in this circumstance than simply leaving the variable in its natural units.

In practice it is always a good idea to look at simple correlations, even if they are biased estimators of what you want, as they are in this case. It is a good way to learn more about the data. But the simple correlations are a combination of the adjusted plus/minus rating that I compute and biases of different sorts. They are not really an "upper" or "lower" bound of any kind. Really they are just another estimate of player effectiveness, albeit one with lots of biases.
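A small synthetic illustration of the natural-units point (all numbers invented; this is not Rosenbaum's data or model): with a 0/1 on-court dummy, the raw slope is directly "points added when the player is in the game," while the standardized slope is in awkward per-standard-deviation units.

```python
import random

# Synthetic example: one player dummy (1 = on court), true on-court effect +5.
random.seed(0)
n = 10000
x = [1 if random.random() < 0.3 else 0 for _ in range(n)]   # on court 30% of time
y = [5.0 * xi + random.gauss(0, 3) for xi in x]             # margin with noise

mean_x = sum(x) / n
mean_y = sum(y) / n
var_x = sum((xi - mean_x) ** 2 for xi in x) / n
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n

b = cov_xy / var_x            # raw slope: close to +5, in points, readable as-is
sd_x = var_x ** 0.5
sd_y = (sum((yi - mean_y) ** 2 for yi in y) / n) ** 0.5
beta = b * sd_x / sd_y        # standardized slope: unitless, harder to interpret

print(b)     # near the true +5 effect
print(beta)  # some fraction of a standard deviation; not obviously useful here
```

The information content is identical (beta is just b rescaled by sd_x/sd_y), which is why the choice is purely presentational, as Dan says.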
kbche



Joined: 19 Jul 2005
Posts: 51
Location: washington d.c.

PostPosted: Tue Aug 16, 2005 9:31 pm Post subject: Defensive Player Ratings Reply with quote
Hi Dan,

I looked at your top 6 defensive centers, power forwards, small forwards, shooting guards, and point guards. My observations are as follows:

1. Only 3 of the 30 players have ever won a championship (correct me if I am wrong).

2. 23 of the 30 players played on teams in the 04-05 season that made the playoffs.

3. Only 3 of the top 6 small forwards were on playoff teams in the 04-05 season. This was the lowest percentage of all categories (all other categories indicated that 5/6 played on playoff teams).

Thus the top defensive players were generally on the better teams. Are the players making the team, or are the teams making the players? How can we tell from your model?

Have you considered different OLS estimates to improve the goodness of fit? Assists made and points scored are not independent variables and probably should not be treated as if they were in a model. A player cannot record both the basket and the assist on the same play. Does your model account for this?

You added an appreciation/depreciation factor for players. How was this developed? A player would have to be on the same team with the same teammates for successive years, and the players' performance would have to be normalized.

Kimberly
Dan Rosenbaum



Joined: 03 Jan 2005
Posts: 541
Location: Greensboro, North Carolina

PostPosted: Tue Aug 16, 2005 11:55 pm Post subject: Re: Defensive Player Ratings Reply with quote
kbche wrote:
Hi Dan,

I looked at your top 6 defensive centers, power forwards, small forwards, shooting guards, and point guards. My observations are as follows:

1. Only 3 of the 30 players have ever won a championship (correct me if I am wrong).

2. 23 of the 30 players played on teams in the 04-05 season that made the playoffs.

3. Only 3 of the top 6 small forwards were on playoff teams in the 04-05 season. This was the lowest percentage of all categories (all other categories indicated that 5/6 played on playoff teams).

Thus the top defensive players were generally on the better teams. Are the players making the team, or are the teams making the players? How can we tell from your model?

Have you considered different OLS estimates to improve the goodness of fit? Assists made and points scored are not independent variables and probably should not be treated as if they were in a model. A player cannot record both the basket and the assist on the same play. Does your model account for this?

You added an appreciation/depreciation factor for players. How was this developed? A player would have to be on the same team with the same teammates for successive years, and the players' performance would have to be normalized.

Kimberly

Interesting analysis on the relationship between my highest rated defensive players and team performance. But I am not sure the patterns are strong enough to tell us much other than that teams with better defensive players tend to be better teams.

Points and assists are not independent, but the whole point of regression is to estimate the partial effect of one variable holding the effect of other variables constant; dependence between independent variables is typically the motivation for running a regression in the first place. There is no need to hold other variables constant if the other variables are independent.
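A toy illustration of that point, with synthetic data and made-up coefficients (nothing here is the actual model): even when two regressors are strongly correlated, the multiple-regression slopes recover each one's partial effect.

```python
import random

# Synthetic data: "assists" and "points" are correlated, with true partial
# effects of 1.0 and 2.0 on the outcome. All variables are roughly zero-mean,
# so a no-intercept OLS fit is adequate for the illustration.
random.seed(1)
n = 5000
pts, ast, y = [], [], []
for _ in range(n):
    a = random.gauss(0, 1)
    p = 0.7 * a + random.gauss(0, 1)          # points correlated with assists
    pts.append(p)
    ast.append(a)
    y.append(2.0 * p + 1.0 * a + random.gauss(0, 1))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Solve the two-variable OLS normal equations directly.
Spp, Saa, Spa = dot(pts, pts), dot(ast, ast), dot(pts, ast)
Spy, Say = dot(pts, y), dot(ast, y)
det = Spp * Saa - Spa ** 2
b_pts = (Saa * Spy - Spa * Say) / det   # partial effect of points, near 2.0
b_ast = (Spp * Say - Spa * Spy) / det   # partial effect of assists, near 1.0
```

A simple correlation of y with points alone would absorb part of the assists effect through their correlation; the regression holds assists constant and separates the two, which is exactly Dan's point about dependence being the motivation for regression.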

Note that the points and assists variables are only in my model relating box score statistics to the basic adjusted plus/minus ratings. That data is not used to compute the adjusted plus/minus ratings. With the box score data, all that I am trying to do is identify different types of players and say something about what the average adjusted plus/minus ratings are for players of different types. It is not measuring the effect of an assist or point scored, per se.

For more details on how the adjusted plus/minus ratings are estimated, see the detailed piece I wrote on this. I think that some of the points you make about this are possibly mistaken, but I am not quite sure because I am not sure I fully understand what you are trying to say.

http://www.uncg.edu/bae/people/rosenbau ... inval2.htm
JPOP



Joined: 07 Aug 2005
Posts: 5


PostPosted: Wed Aug 17, 2005 5:36 am Post subject: Re: Defensive Player Ratings Reply with quote
Hey Dan,

I'm pretty impressed with the overall performance of your defensive ratings, but I keep finding aberrations with multiposition players, although I'm only looking at a small sample. Some of these multiposition players defend out of position because of a lack of team depth, because they always defend the best wing or post player, or to match up with the opposition.

There are quite a few players who defend well at their primary position but don't defend well at a secondary position. Not everyone has the multiposition defensive flexibility of Garnett or Kirilenko, who are nearly equally effective defending two positions. Just the same, such players may be a team's best option to defend a certain position for stretches of a game.

These multiposition players are very likely to create some year-to-year noise defensively, as their roles may change with additional personnel on their team. Primary-position defense is much less likely to spike from one year to the next than the defense of a player whose share of minutes spent away from his primary position shifts dramatically.

There are quite a few examples, but here are a few primary ones.

Robert Traylor is an awful defensive player when matched up with centers. At the same time he is more than adequate defending power forwards. His work at center was so bad, it pushed him into the bottom 10.

Eric Snow has made a career out of his defense. With Iverson he often cross-matched defensively, doing a great job defending shooting guards. This past year in Cleveland, generally not needing to cross-match, his defense was stellar when he defended shooting guards, but when defending point guards he was incredibly average.

Darius Miles cracks the top 10 (#3) at small forward, likely due to his prowess at defending the rangy, quick power forwards of the Western Conference. I guess that reflects some coaching genius by Cheeks and an assignment Miles never had in the past. That could also explain the somewhat high standard error.

It seems to me that positional ratings for multipositional players would go a lot farther than lumping together all of their on-court performances. There are too many players who are efficient offensively or defensively at one end of the floor but lack the length or bulk to be effective at the other. This is also the type of information that escapes more than a handful of coaches in their substitution practices.

Again, this is a great set of ratings and has some true promise. I look forward to seeing the whole list some day soon and believe there are quite a few coaches out there who would be shocked if they were aware of some of the shortcomings their substitution patterns bring.
_________________
7 or more interrelated variables become art although I applaud the efforts of those who try to put science behind them.