APBR-DraftExpress 2015 NBA Draft Project
Re: APBR-DraftExpress 2015 NBA Draft Project
Thanks for the links to those articles, Jesse. I now remember seeing at least one of them, but after a year it had become a fuzzy background memory. Very interesting work. I noted your finding that a preference for the young and very young is empirically supported but has shown signs of being overdone in recent years.
Re: APBR-DraftExpress 2015 NBA Draft Project
So here's my followup writeup. Let me know if this is on point. If not I'll be happy to change it. And if anyone else along the pipeline wants to change it, that's totally fine with me as well. Whatever's easiest for everyone.
Since my model includes high school ranking while several other models do not, players with strong high school rankings will rate significantly better, and those with weaker rankings, or, more importantly, no rankings at all, will rate significantly worse. Myles Turner, Cliff Alexander, and Frank Kaminsky are all examples of this phenomenon.
In addition, my model covers data that only goes back to 2002, so the weights and importance of each feature reflect only those players who have been drafted since 2002. This means they will differ slightly from other draft models if those models are trained on larger or smaller sets of data. Karl Towns, for example, may suffer from my model only being trained on data going back to 2002. Stars have a high amount of leverage on draft models (as they should), and the collection of non-high-school stars (at least as defined by my target variable, a RAPM-WS blend) drafted since 2002 is somewhat guard-heavy (Chris Paul, Stephen Curry, James Harden, Dwyane Wade, etc.), so Karl Towns is missing the benefit of a model trained on more star bigs, players like the great Tim Duncan, who was drafted before 2002. I do not model players by position, as it appears to degrade model performance.
My model’s final output leans heavily on neural networks, and thus overfitting is a fair criticism. Overfitting is a statistical phenomenon that occurs when models are “greedy” and essentially “memorize” the training data in the pursuit of accuracy rather than discovering actual trends. Some of my base models utilize a technique called boosting, a machine learning technique that aggressively pursues accuracy by iterating over the training data, each time focusing on the predictions the previous learners had the most trouble with. With every iteration the learners improve their predictive power, which eventually leads to impressively accurate results. As you can imagine, this technique can lead to even more overfitting. To alleviate some of these overfitting concerns, I also take input from more stable regression-based models, and I perform a technique known as bootstrap aggregating on some of my neural networks. Bootstrap aggregating, or bagging, is a technique designed to alleviate overfitting and increase predictive stability by sampling (with replacement) from the original data set, training several neural networks on the different samples, and then averaging the outputs together. Since each individual neural network is trained on its own unique training set sampled from the original data, actual trends are more likely to be identified, and noise is more likely to be ignored, when the networks' outputs are combined for the final prediction. Although overfitting is a very valid concern, I utilize neural networks in my draft model because they can be powerfully accurate.
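To make the bagging step concrete, here is a minimal Python sketch of the general idea - the data, target, and network settings below are stand-ins for illustration, not my actual pipeline:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))                                   # stand-in prospect features
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=300)    # stand-in RAPM-WS-style target

ensemble = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))                  # bootstrap sample: rows drawn with replacement
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    ensemble.append(net.fit(X[idx], y[idx]))

def bagged_predict(X_new):
    # The final prediction averages the individual networks' outputs,
    # which smooths out noise any single network may have memorized.
    return np.mean([net.predict(X_new) for net in ensemble], axis=0)

print(bagged_predict(X[:3]))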
With regard to feature importance: age, points per 40, steals per 40, true shooting percentage, assists per 40, and high school rank are among the most important features in the base models trained on players who were drafted, played in the NBA, or participated in the NBA combine. Among the base models trained on all prospects, the most important features are high school rank, points per 40, strength of schedule, minutes per game, free throw rate, and blocks per 40. All of my features are pace-adjusted.
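For anyone curious how that sort of ordering is typically extracted from a boosted base model, here is a generic scikit-learn sketch - the data is randomly generated and the feature names are placeholders, not my actual features or fits:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder prospect data; in a real model these would be the pace-adjusted stats above.
FEATURES = ["age", "pts_per40", "stl_per40", "ts_pct", "ast_per40", "hs_rank"]
rng = np.random.default_rng(7)
X = rng.normal(size=(400, len(FEATURES)))
y = X @ rng.normal(size=len(FEATURES)) + rng.normal(size=400)   # synthetic target

gbr = GradientBoostingRegressor(random_state=0).fit(X, y)
for name, imp in sorted(zip(FEATURES, gbr.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")   # impurity-based importances, summing to 1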
The last thing I would do as a GM is take my model predictions as gospel and draft exclusively based on them. Even in a theoretical universe where I believe my model to be infallible (it isn’t) and I believe it always evaluates prospects perfectly (it doesn’t), selecting the best available player isn’t necessarily the way to get maximum return on assets (as weird as that sounds). Thus far, draft models have done little to no work on evaluating player-team fit. I am of the opinion that there is untapped value here to be quantified and exploited. Fit is an area that I hope to see draft models improve on by next year.
But the consideration that is even more important than player-team fit is a prospect’s perceived value. Let’s say, for example, I am on the clock with my first-round pick. Player A and Player B are available, and my model slightly prefers Player A to Player B. Should I always draft Player A? Not necessarily. What if the market significantly prefers Player B to Player A? The market’s view should be treated as an extremely valuable opinion on who the better player will be. In addition, if I take Player B, the player with the higher market value, he will be more liquid and easier to trade than Player A, and I may be able to get more return from selecting Player B even though Player A may end up being the better overall player.
If I were a GM, I would consider a model consensus a valuable input, perhaps the most valuable input, but it would be FAR from the only one. There are things that models can miss, and draft models are ALL built on the assumption that what has worked in the past will work in the future, which is a massive assumption. There are also aspects that affect players’ careers that models do not consider at all: player personality, team culture, player development effectiveness, and more. For this reason, it is still vitally important to consider the input of human scouts, the coaching staff, and other members of the front office.
Also, I've been doing a lot of tinkering. I've changed/transformed some features and slightly changed the base models considered in my final output. I also added the Euros. I now have what I think are going to be my final predictions before the draft. I shared a csv with Jesse for his tool, and I've also posted the numeric rankings here. Column D is the number I'm using, but I'm presenting 3 for fun.
https://docs.google.com/spreadsheets/d/ ... =561660716
Please let me know if these updates need to be addressed for DX stuff, and I'd be happy to help in any way needed.
Amp, you mentioned moving the numbers to a 0-100 scale (min-max normalization). It's really easy. We just need everyone's raw numbers.
(xval - min(x)) / (max(x) - min(x)) * 100
Where xval is each player's projected number in each model, min(x) is the smallest projected number in each model, and max(x) is the largest projected number in each model. As other people mentioned, this method captures the fact that the difference between the 1st and 2nd best player isn't always the same as the difference between the 54th and 55th, which is what averaging ranks is essentially assuming.
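For instance, here's the same rescaling in a few lines of Python, with made-up projected scores (the names and numbers are placeholders):

# Min-max rescaling of model outputs to 0-100; the scores are invented for illustration.
scores = {"Player A": 4.1, "Player B": 3.2, "Player C": 0.7}
lo, hi = min(scores.values()), max(scores.values())
scaled = {p: (v - lo) / (hi - lo) * 100 for p, v in scores.items()}
print(scaled)   # the highest projection maps to 100.0, the lowest to 0.0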
Really interested to see other Mario H. evaluations because my model, it's not a fan.
Re: APBR-DraftExpress 2015 NBA Draft Project
Crow wrote: Any indication teams pay attention to the academic literature on this topic?
I certainly do. If the academic literature is good, why try and reinvent the wheel! (You can always try and tweak/test, I guess.)
Re: APBR-DraftExpress 2015 NBA Draft Project
nrestifo wrote: Really interested to see other Mario H. evaluations because my model, it's not a fan.
Mario H? I missed this reference - you got a link?
I'm REALLY hoping to have my model ready tomorrow - there is a team that asked to see the results before I publish, but I am by no means expecting that to amount to anything (yet?) - so I expect to do the write up & make the results available here Sunday.
Re: APBR-DraftExpress 2015 NBA Draft Project
Haha I just meant how other models are evaluating Mario Hezonja, but I was too lazy to spell out his name.
Re: APBR-DraftExpress 2015 NBA Draft Project
Who is right or wrong about Booker? Could be a good case to highlight and would be enhanced by feedback from Jonathan and any team insiders willing to speak (or misdirect?).
Re: APBR-DraftExpress 2015 NBA Draft Project
nrestifo wrote: Amp, you mentioned moving the numbers to a 0-100 scale (min-max normalization). It's really easy. We just need everyone's raw numbers.
(xval - min(x)) / (max(x) - min(x)) * 100
Where xval is each player's projected number in each model, min(x) is the smallest projected number in each model, and max(x) is the largest projected number in each model. As other people mentioned, this method captures the fact that the difference between the 1st and 2nd best player isn't always the same as the difference between the 54th and 55th, which is what averaging ranks is essentially assuming.
Really interested to see other Mario H. evaluations because my model, it's not a fan.
Thanks for the input.
So far, I have seen several people post in favour of a scaled ranking (0-100), but I must say, I'm still not sold (largely because nobody has actually posted an argument against using rankings, or for why scaled rankings are superior).
In theory, a scaled ranking is far superior. However, each ranking system has a unique distribution of ratings. Because each system is so disparate, I think it would be flawed to combine/average scores from different rankings that are trying to display different outputs (and are weighted differently). Because of this, I think it would only make sense to show a scaled rating if it were to be equated so all models have a uniform distribution of ratings. If you are going to equate/curve the rankings, then the result would be the same as just using rankings.
Further, I don't see any huge issues with using a blend of rankings.
While showing distinctions between different ranges in the draft has a purpose, I think it is significantly less important than the strength of the overall rankings. Practically speaking, most fans know there is a gap after the top 3, and have a general understanding of the slope for the rest of the draft.
Re: APBR-DraftExpress 2015 NBA Draft Project
ampersand5 wrote: Further, I don't see any huge issues with using a blend of rankings. While showing distinctions between different ranges in the draft has a purpose, I think it is significantly less important than the strength of the overall rankings. Practically speaking, most fans know there is a gap after the top 3, and have a general understanding of the slope for the rest of the draft.
I don't think it's a "huge" issue at all, I just like the scaled results better - I think it reveals more. You are still throwing out the top & bottom to help lessen the possibly wonky results of a model or two with respect to certain players.
That being said - couldn't you blend the rankings like you have, but also have a scaled blend average "score" (100 being top obviously if everyone had the same #1 guy) as a column for each player with an explanation of what that is? Why not try it & see how it looks - more info doesn't hurt imo.
Re: APBR-DraftExpress 2015 NBA Draft Project
ampersand5 wrote: So far, I have seen several people post in favour of a scaled ranking (0-100), but I must say, I'm still not sold...
It's an issue, for sure. If one rating system has a player at #1 and another player at #10, and a second system has player 1 at #40 and player 2 at #30, should they average the same? Unlikely; there's usually a bell-curve distribution. Player 1 should be rated higher, possibly much higher, in an average of the two.
Re: APBR-DraftExpress 2015 NBA Draft Project
Won't a bell-shaped distribution occur naturally, when you have multiple systems ranking the same players?
If a #1 (or 40) ranking is a major outlier, it won't have an undue impact on the avg of (say) 10 systems' rankings. Should it?
Re: APBR-DraftExpress 2015 NBA Draft Project
Mike G wrote: Won't a bell-shaped distribution occur naturally, when you have multiple systems ranking the same players? If a #1 (or 40) ranking is a major outlier, it won't have an undue impact on the avg of (say) 10 systems' rankings. Should it?
Good point; still, it's the statistically rigorous approach (which we should be doing here). 😕
Re: APBR-DraftExpress 2015 NBA Draft Project
http://www.canishoopus.com/2015/6/5/873 ... d#comments
Layne, my overall impression is that a lot of the over- and under-rating significantly involves over- and under-performance on eFG%. Usage and TS% would be worth checking too. How defense is rated in college and the pros (not that well, in many cases) is probably another big part of it.
Re: APBR-DraftExpress 2015 NBA Draft Project
Follow-up submission:
CPR ratings are not a “draft in this order” list. No objective model should be used in that fashion because no model can account for everything that a team must consider when making their selections.
The value in a draft model is highly dependent on how well its results can be interpreted. Any team looking to use a draft model appropriately must have some notion of the primary reasons why certain players rate highly and others do not. Simplicity in a model greatly enhances interpretation.
CPR is simple. It measures how excellent a player’s best performances were with regard to box score metrics. The only caveat is that CPR adjusts based on the player’s year in school. Otherwise, there are no adjustments for the player’s height, his rating out of high school, or the perceived quality of his competition. There aren’t even weights on the statistics themselves: a great performance in rebounds is the same as a great performance in steals. All of this allows for an easy interpretation of the results. CPR simply measures excellence.
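The exact CPR computation isn't spelled out here, so purely as a hypothetical illustration of the "best performances, equally weighted" idea, a sketch might look like the following - the stats, the top-five cutoff, and the year adjustment are all invented, not the real formula:

import numpy as np

STATS = ["pts", "reb", "ast", "stl", "blk"]   # every stat treated identically, no weights

def excellence_rating(game_logs, year_in_school):
    """Hypothetical: game_logs has shape (n_games, len(STATS)); year_in_school is 1-4."""
    logs = np.asarray(game_logs, dtype=float)
    z = (logs - logs.mean(axis=0)) / (logs.std(axis=0) + 1e-9)  # standardize each stat
    best_games = np.sort(z, axis=0)[-5:]          # keep each stat's five best single games
    raw = best_games.mean()                       # rebounds count the same as steals, etc.
    return raw - 0.5 * (year_in_school - 1)       # invented adjustment for year in school

rng = np.random.default_rng(1)
print(excellence_rating(rng.poisson(10, size=(30, len(STATS))), year_in_school=3))

Again, this is not CPR itself; it only mirrors the design principles described above.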
CPR works. While inconsistent play means season-average statistics can blur a player’s value, focusing only on his best performances has provided a clearer view of his potential. CPR ratings have accurately identified high picks that should have been drafted in the second round, mid-first-round picks that should have gone in the top 5, and second-round picks that should have been drafted in the first.
At the very least, teams should use CPR as a call for more evidence in support of a team’s selection. If a team plans to draft a player that rates particularly low (say below 2.5) in CPR over a player that rates particularly high (say above 7.5), that team should have a good explanation for why the prospect they are choosing never put up the kind of excellent box score production that is consistent with the prospects that turn into excellent NBA players.
This leads nicely to this year’s best example of a player who rates higher by the eye test than by CPR. Noticeably missing from CPR’s top 14 is Willie Cauley-Stein. In an NBA where rim protection is far more important than post scoring ability, Cauley-Stein appears to be a gem. His low CPR rating does not preclude that possibility. CPR relies on box score data, and much of what Cauley-Stein brings to the game is not recorded in the box score. The real value of CPR in the case of Cauley-Stein is that it raises an interesting question: if Cauley-Stein is going to be a successful pro in the mold of a Tyson Chandler or Joakim Noah, why didn’t this junior put up spectacular numbers, at least on occasion, in offensive rebounds or blocked shots? There could be very good explanations that justify Cauley-Stein’s value and his selection in the top 5 of the NBA draft. CPR doesn’t say “don’t draft Cauley-Stein in the lottery.” Instead, it calls for an explanation of why he’s worth that pick when his college box score numbers were not consistent with the numbers we have seen from players with a high success rate in the NBA.
CPR provides one more piece of information for the decision makers of NBA organizations. It should be taken into consideration in the same way traditional scouting reports, team needs and objectives, position scarcity, player interviews, and injury or character concerns contribute to the team’s eventual selection.
Re: APBR-DraftExpress 2015 NBA Draft Project
Crow... I agree with your impression. eFG% seems to be an impossible nut to crack. How the hell do you predict that Shabazz Muhammad is going to come into the NBA and shoot a considerably higher percentage than he did in college while roughly maintaining usage%? How do you project that Michael Redd, who shot 32% from the college line, is going to make a living as a sniper? So many cases where guys wildly deviate from any reasonable college-based expectations in terms of shooting efficiency.
Re: DX writeups... I am swamped with real world work, but I'll try to get something ASAP.
Re: APBR-DraftExpress 2015 NBA Draft Project
DSMok1 wrote: It's an issue, for sure. If one rating system has a player at #1 and another player at #10, and a second system has player 1 at #40 and player 2 at #30, should they average the same? Unlikely; there's usually a bell-curve distribution. Player 1 should be rated higher, possibly much higher, in an average of the two.
This is not necessarily the case. For example, one system could have player 1 rated 99.7 (ranked 1st) and the 10th-ranked player rated 99.3 (ranked 10th), while a second system has player 1 rated 23 (ranked 40th) and the other player rated 74 (ranked 30th).
This is ultimately my point - we have no idea what the distribution is of the different ratings. By using lots of models, we are trying to find consensus on where a player ranks.
I understand what you were trying to say - that there is (potentially) bigger variation in player scores between middle-of-the-pack rankings (30-40) than at the top (1-2).
I find this to be problematic because the models were created to produce a player ranking system, not to create a uniform player score rating. By way of example -
Let's look at how two models could rate the top ten players in the draft:
Model 1:
1) 100
2) 99
3) 98.5
4) 98
5) 97
6) 96.5
7) 96.4
8) 96
9) 95.8
10) 95.6
Model 2:
1) 100
2) 92
3) 78
4) 55
5) 54
6) 52
7) 50
8) 49.7
9) 47
10) 45
Any model that has a different distribution of player ratings than the rest is going to throw off the entire cumulative average for any of the players who do well/poorly in that model.
Because each model has a completely different way of rating players within its own system, I do not think it makes sense to use these comparative ratings in relation to other models.
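To make that concrete in a few lines of Python, reusing the numbers above: both models rank these ten players identically, so an average of ranks shows no disagreement at all, yet a blend of the raw ratings is driven almost entirely by Model 2's wide spread.

model1 = [100, 99, 98.5, 98, 97, 96.5, 96.4, 96, 95.8, 95.6]
model2 = [100, 92, 78, 55, 54, 52, 50, 49.7, 47, 45]

blend = [(a + b) / 2 for a, b in zip(model1, model2)]
print(blend[0] - blend[3])            # 23.5-point blended gap between players 1 and 4
print((model1[0] - model1[3]) / 2)    # Model 1 contributes only 1.0 of that gap
print((model2[0] - model2[3]) / 2)    # Model 2 contributes the remaining 22.5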