Non-metric based player evaluation (& hidden eval.)

Post by Crow » Sun Nov 12, 2017 4:48 am

Metric-based player evaluations get criticized regularly, and both specific evaluations and the methods behind them should be critiqued. Many metrics have gaps, questionable weights, and debatable player and team adjustments. But it shouldn't stop there. A number of team insiders, past and present, and media analysts critique one or all metric-based approaches and then, in appearance or in reality, lean on non-metric evaluation, or at least non-systematic holistic evaluation, or on a hidden, unevaluated systematic method. That seems to be a weaker, vaguer approach. I've said this before, but I am going to say it at least one more time.

It may be helpful to say on the one hand this, on the other hand that, on yet another hand x, y, and z, and so on, until you have many, many unsummed partials. Especially if the audience is not inclined to do its own gathering, doesn't know where to look, or can't get at what is not readily available. Most people, though, can do most of this if they try. Some will be better and some worse at ad-hoc summation of all these partials, and you could evaluate their performance by tracking their specific summary evaluations and recommendations over time and across many cases. But that is neither easy nor clear-cut, and it is slow. And you can't directly evaluate the roll-up of the parts itself, because it is ad-hoc, not clearly stated, and possibly inconsistent.
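To make that tracking idea concrete, here is a minimal sketch of scoring an evaluator's past calls against realized outcomes. The player names, tiers, and calls are entirely hypothetical; the point is only that an ad-hoc evaluator can be graded over time, even when the roll-up method itself can't be inspected.

```python
# Hypothetical sketch: grading an evaluator's past calls against outcomes.
# All names, tiers, and records below are invented for illustration.

# Each call: (player, predicted tier, realized tier by some agreed-on measure)
calls = [
    ("Player A", "top-20", "top-20"),
    ("Player B", "top-20", "outside"),
    ("Player C", "outside", "outside"),
]

hits = sum(1 for _, predicted, realized in calls if predicted == realized)
hit_rate = hits / len(calls)
print(f"Hit rate over {len(calls)} calls: {hit_rate:.0%}")
```

Even this crude scoring needs many cases before it says anything, which is exactly why the approach is slow.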

So the next time someone criticizes RPM or whatever, it may be appropriate to think or ask: "What are you doing instead, and why is it better?" If you don't think the popular or emerging new metrics are good enough, how about making and sharing a better one? "Trust me and my undocumented approach" is a pretty weak response, or really an implicit non-response. Maybe some people can show that kind of faith in certain individuals, but not me. If they are that good, they should be able to document the method and allow full evaluation of it. They may not want to, which I can understand, unless I were paying them. If I were, I'd really insist on it. Some may not even pretend to have documentation, and just assert a "better" method without proving it. Fwiw.

If anyone wants to defend non-systematic roll-up player evaluation as superior to ANY systematic means of your choice, please share your rationale. There is a difference, I think, between defending further thinking after a systematic roll-up and not doing a systematic roll-up at all. I don't have a big problem with the after step, though it depends on how far you are willing to jump away from the numeric analysis and why. Some leaping beyond may be vital, but it should be recognized for what it is. Not doing a systematic roll-up at all seems far less defensible. Why not at least try your best at that, especially since you still have post-roll-up thinking space and wiggle room afterward?

Will any of the non-systematic roll-up practitioners identify themselves as such and explain their case for taking this approach? If you listen to what is said and not said, you may be able to identify them for yourself, if you care to, or if you have to. There are some who appear to try to have it both ways: they critique metrics and the arrogance of metrics, but when pressed on whether they practice without them, they scurry back to acknowledging that they use them, though often without identifying which ones. It may be more common to identify (and maybe frown upon) this behavior amongst coaches, team execs, or media not labeled as advanced-analytics practitioners, but it exists within the advanced-analytics circle as well.

Looking at many metrics and not being a rigid partisan of one is a credible approach. I try to survey and stay open-minded. But then, if pushed for an answer, I'll sort, include or exclude, weight, blend, get an answer, review it, revise it, and finalize it. Systematize as far as you can take it, then go a bit farther. A minimal sketch of that kind of blend follows.
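This is only an illustration of the sort/weight/blend step, not anyone's actual method. The metric names, player values, and weights below are all hypothetical; the z-scoring just puts metrics on a common scale before the weighted sum.

```python
# Minimal sketch of a systematic roll-up, assuming you have already chosen
# which metrics to include and how to weight them. Metric names, player
# values, and weights are hypothetical placeholders.
from statistics import mean, stdev

# Hypothetical per-player values for three metrics (e.g., RPM-like ratings).
metrics = {
    "metric_a": {"Player A": 4.1, "Player B": 2.3, "Player C": -0.5},
    "metric_b": {"Player A": 3.0, "Player B": 3.5, "Player C": 0.2},
    "metric_c": {"Player A": 5.2, "Player B": 1.1, "Player C": 1.0},
}
weights = {"metric_a": 0.5, "metric_b": 0.3, "metric_c": 0.2}  # a judgment call

def zscores(values):
    """Standardize one metric so metrics on different scales can be blended."""
    mu, sigma = mean(values.values()), stdev(values.values())
    return {player: (v - mu) / sigma for player, v in values.items()}

standardized = {name: zscores(vals) for name, vals in metrics.items()}
players = next(iter(metrics.values())).keys()
blend = {p: sum(weights[m] * standardized[m][p] for m in metrics) for p in players}

for player, score in sorted(blend.items(), key=lambda kv: -kv[1]):
    print(f"{player}: {score:+.2f}")
```

The review/revise/finalize steps after the blend are exactly the "then a bit farther" part: the system gives you a stated, checkable answer, and the human judgment on top of it is visible as such.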

If you say you don't believe in player evaluation in isolation, that's fine. But are you practicing player-in-context evaluation systematically, or in an ad-hoc, undefined way? That question remains central and deserves an answer. It may be sort of one, sort of the other. But know what you do, say it, and stand by it. And critique will go to each style, not just one.

There will be insiders who don't have a truly systematic main method, or who won't fully document it as long as employers allow that, or who don't care enough about method and documentation. In public, current and former insiders of this persuasion offer less in these areas than those who reveal evaluations or methods (though they can and will do what makes sense for them). They'll still get a lot of the panel, podcast, and article attention, but the yield to the audience is lower, often outright low. Praise to the presenters who actually reveal something, or more, about their evaluations and methods. That is what is most worth paying attention to, and it usually comes early in their careers.
