Re:
https://www.intraocular.net/posts
The boxscore prior for bigs on defense varies more than pure APM because one uses individual data while the other uses team data, where the big is probably not the primary defender on 70% of defensive play endpoints.
Making boxscore priors relative to position is a subjective intervention that contradicts what the boxscore stats plainly state.
If the boxscore stats show a big difference, there is a big difference.
Average impacts are not equal across positions, but let's make them that way.
Rate players relative to position if you want to. That is different from rating all players equally.
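The distinction can be made concrete with a tiny sketch. The player names, ratings, and position labels below are entirely invented, purely to show how a position-relative scale differs from an equal, league-wide scale:

```python
# Hypothetical ratings (points per 100 style numbers) and positions.
ratings = {
    "guard_a": 2.0, "guard_b": -1.0,   # guards
    "big_a": 4.0, "big_b": 1.0,        # bigs
}
positions = {"guard_a": "G", "guard_b": "G", "big_a": "C", "big_b": "C"}

# Equal scale: compare every player to the overall league mean.
overall_mean = sum(ratings.values()) / len(ratings)
equal_scale = {p: r - overall_mean for p, r in ratings.items()}

# Position-relative scale: compare each player to his position's mean.
pos_groups = {}
for p, pos in positions.items():
    pos_groups.setdefault(pos, []).append(ratings[p])
pos_means = {pos: sum(v) / len(v) for pos, v in pos_groups.items()}
pos_scale = {p: ratings[p] - pos_means[positions[p]] for p in ratings}
```

On the equal scale the big grades well above the guard; relative to position they come out identical. That is exactly the information a positional adjustment bakes in, or hides, depending on your purpose.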
Position-adjusted ratings may well predict team performance more closely, since teams still generally, or at least mostly, adhere to traditional positions across most lineups and on average.
Position-adjusted Bayesian Box Adjusted Plus Minus has more in common with BPM than with non-positionally adjusted metrics.
Compare position-adjusted ratings to other position-adjusted ratings, and neutral / equal non-positionally adjusted ratings to similar ratings. And know your purpose.
A player is the player and their role. Theoretically you can divide those pieces. But the player you see is the whole player as used.
I have called for RAPM by position in the past. It will never be perfect, but splits by estimated position would be worth seeing and considering, whether positions are estimated from play-by-play based on size and typical use, or from actual video proof of the matchup at the time the play ends. I assume Bayesian Box Adjusted Plus Minus was using positions from roster listings, with mismatch issues versus actual play-by-play use, since no proof of anything more granular was indicated. Such assumptions would be distorting without recognizing it.
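For the core regression itself, plain RAPM is just ridge regression on stint data, and a position split would only group the resulting coefficients. The stint matrix, point margins, and position labels below are invented solely to show the shape of the computation, not anyone's actual method:

```python
import numpy as np

# Hypothetical stint design matrix: rows are stints, columns are players
# (+1 when on court for the home side, -1 for the away side, 0 off court).
X = np.array([
    [ 1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1, -1],
], dtype=float)
# Net points per 100 possessions for each stint (invented numbers).
y = np.array([3.0, 5.0, 1.0])

def rapm(X, y, lam=1.0):
    """Ridge solution: beta = (X'X + lam*I)^-1 X'y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

beta = rapm(X, y)

# An RAPM-by-position split would bucket coefficients by an estimated
# position label per player (invented here) and compare distributions
# within each bucket rather than across the whole league.
positions = ["G", "G", "C"]
by_pos = {}
for rating, pos in zip(beta, positions):
    by_pos.setdefault(pos, []).append(float(rating))
```

The quality of the split rests entirely on how the position labels are assigned, which is the roster-listing versus play-by-play versus video question above.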
Everything out there has flaws. So study everything, know the differences, know the flaws, and use some combination or blend. And don't expect or pretend to have perfection. Be as informed as possible using stats & metrics and try to make that work better than being less informed. Use less than all of it if you want. Ultimately, judge the use of the information array and the results of that use more than the methods and datapoints themselves. Applied analytics over "the analytics". The analytics are a base, but I spend most of my time trying to apply them, knowing they are imperfect but thinking I can think better with them (at least on average). Draft pick evaluation. Other player acquisitions & disposals. Lineup, minute, and shot distribution decisions. Matchup decisions. Applied analytics.
I don't "know" how heavily authors of analytics use their own products (including producers of simple analytic graphs & lists), and those of others, privately, but there is often not much public paper-trail evidence of extensive application. Value comes in application. Extensive application. Novel or speculative application. Layered application. Application with a level of error.
It is unclear to me how much time Coaches and top front office people spend in direct exposure to "analytics" or applied-analytics reports and recommendations, and how much weight they are given. It is happening, and more than in the past, but how much? Are most decisions substantially based on analytics and "advanced" analytics, or only sometimes?
Draft, free agency, and trades show a mixed appearance of use and weight. Lineup management generally shows low signs of applied analytics. Shot selection and distribution certainly could use more.
What % of total "analytics staff" time is spent on "analytics", from conception to production, maintenance, and dissemination, versus "applied analytics" and presentation & advocacy? We don't hear much about time allocation, throughput, or results. Occasional cases, but usually brief mentions, not details of what was considered, dismissed / ignored, or the level of opportunity to make a case and fight for it.
I don't know the level of restriction and legal threat involved in non-disclosure agreements; but relatively speaking, little of the reality of team decision-making in the analytics era has been exposed. I guess a new book and some past books could be reviewed closer. Not much comes out at Sloan or in articles / podcasts / tweets. Very little if anything has ever been posted here. The trickle is far from "enough" for an accurate understanding of the internal climate & typical operation.
Recommend a lineup's initial use, greater use, or lesser use how often, and to what reaction, if any? What level of use of analytics-based staff recommendations "should" be considered, debated, implemented, reviewed? Only as much as the Coach wants / allows, or the amount the GM says? Or the amount the Analytics Director recommends, fights for, or signed on for? What are the standards for analytics and applied analytics use in every area? What is the experience?
You hear of a few lineups recommended / used, but what is the overall openness to using input? I heard a recent case of a key staffer recommending a lineup, which was used and to positive effect. Then I checked the data and found that total use for the season remained very minor.