Distribution of Quality of Competition and Teammates Metrics


The analysis community has studied these metrics in various ways. The purpose of this post is to lay out the way I understand the metrics, and identify areas of additional research.

The effects of competition and teammates on players are not new concepts in hockey. We hear about them all the time in analysis and conversation: “Jonathan Toews is deployed by his coach specifically to shut down the top players of the opposition”, “4th liners play against the opposing 4th line”, “Sidney Crosby makes his teammates better”, and so on.

Having analyzed the metrics used to quantify quality of competition and quality of teammates, I came to two conclusions.

The first takeaway is that using current metrics, I did not find evidence that coaches can choose the quality of competition their players face over a full season of play.

The second takeaway is that quality of teammate effects are observable in a full season sample size.  We can see differences in the quality of players’ teammates.

The cause of these phenomena is simple: the opportunity for coaches to pair teammates together is far more common than the opportunity to linematch players against specific opponents. For example, in 2014-15, Crosby’s most common opposing forward was Dainius Zubrus, at just 35:38 of shared ice time.

Now compare that to the opportunity for coaches to pair teammates together. Over the entire season, Crosby’s most common forward teammate was Chris Kunitz, at 578:24. A player’s ice time against opponents is spread too thinly across the league for quality of competition effects to shine through in a full season.
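The numbers above come down to overlapping shift intervals. Below is a minimal sketch of how shared ice time between two players can be computed from shift data; the shift intervals are invented for illustration, not real 2014-15 data.

```python
# Sketch: total shared ice time between two players, computed as the
# pairwise overlap of their (start, end) shift intervals, in seconds.
# The shift lists below are hypothetical, chosen only to illustrate
# that a teammate pair overlaps far more than an opponent pair.

def shared_seconds(shifts_a, shifts_b):
    """Total overlap, in seconds, between two lists of (start, end) shifts."""
    total = 0
    for a_start, a_end in shifts_a:
        for b_start, b_end in shifts_b:
            total += max(0, min(a_end, b_end) - max(a_start, b_start))
    return total

# Hypothetical shifts (seconds from puck drop) in one game:
crosby = [(0, 45), (120, 165), (300, 350)]
teammate = [(0, 45), (125, 165), (300, 340)]   # deployed with Crosby
opponent = [(10, 40), (200, 240)]              # rarely on against him

print(shared_seconds(crosby, teammate))  # prints 125
print(shared_seconds(crosby, opponent))  # prints 30
```

Summed over a season, concentrated teammate overlap is what lets teammate effects accumulate while opponent overlap is diluted across the whole league.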

Examining the specific metrics further:

[Figure: distribution of CorC% and CorT%. Each dot is a player-season; the x-axis is jittered to highlight the distribution.]

Corsi Quality of Competition % (CorC%) is tightly distributed around the mean (standard deviation of 0.4). Over the course of a season, the effects of playing against such tightly distributed levels of competition wash out.

Corsi Quality of Teammate % (CorT%) is much more widely distributed around the mean (standard deviation of 2.5). This indicates that the effects of a player’s teammates do not wash out over the course of a season. There is observable variance in the quality of teammate metric that we use.
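The usual construction behind both metrics is a TOI-weighted average of the CF% of the players faced (or played with). The sketch below shows why the two distributions differ so much; the ice-time and CF% figures are invented, except for the 35:38 and 578:24 values quoted above.

```python
# Sketch: TOI-weighted quality metric, as typically constructed for
# CorC% / CorT%. Inputs are (shared minutes, that player's CF%) pairs.
# All numbers are illustrative assumptions, not real data.

def weighted_quality(shared_toi_and_cf):
    """TOI-weighted average CF% of the players faced (or played with)."""
    total_toi = sum(toi for toi, _ in shared_toi_and_cf)
    return sum(toi * cf for toi, cf in shared_toi_and_cf) / total_toi

# QoC: ice time is spread thinly across many opponents, so even a
# 58 CF% opponent (35 shared minutes) barely moves the average.
opponents = [(35, 58.0)] + [(20, 50.0)] * 40

# QoT: ice time is concentrated on a few linemates (578 minutes with
# the top one), so their quality dominates the average.
teammates = [(578, 58.0), (300, 52.0), (150, 49.0)]

print(round(weighted_quality(opponents), 2))  # stays near 50
print(round(weighted_quality(teammates), 2))  # pulled well above 50
```

The same averaging formula, applied to thinly spread versus concentrated ice time, mechanically produces a narrow QoC distribution and a wide QoT distribution.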

This pattern holds in the metrics that use TOI as the basis of measurement.

[Figure: distribution of the TOI-based competition and teammate metrics. Each dot is a player-season; the x-axis is jittered to highlight the distribution.]

The competition metric is tightly distributed around the mean (standard deviation of 0.3). There is more relative variance in the TOI-based metrics, but the pattern holds.

The teammate metric is spread out, indicating higher variance (standard deviation of 2.5).

Is this to say that quality of competition does not matter? No. Garret Hohl showed that the effect of quality of competition can be seen by splitting it into bins. In a single game or a playoff series, quality of competition likely has a greater impact than it does over a full season.
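A rough sketch of that binning approach: rather than looking for a raw correlation (the season-long spread in QoC is tiny), split observations into QoC bins and compare average results across bins. All numbers below are synthetic, with a small QoC effect deliberately built in, purely to illustrate the method.

```python
# Sketch of a binned QoC analysis on synthetic data. We simulate a
# tight QoC distribution (sd 0.4, matching the post) plus a small
# built-in penalty for facing harder competition, then recover the
# effect by comparing tercile means instead of raw correlation.
import random

random.seed(0)
games = []
for _ in range(3000):
    qoc = random.gauss(50, 0.4)                  # tight QoC distribution
    cf = random.gauss(50 - 0.5 * (qoc - 50), 2)  # harder comp -> lower CF%
    games.append((qoc, cf))

games.sort(key=lambda g: g[0])   # order by QoC faced
third = len(games) // 3
means = {}
for i, label in enumerate(["easy", "medium", "hard"]):
    chunk = games[i * third:(i + 1) * third]
    means[label] = sum(cf for _, cf in chunk) / len(chunk)
    print(f"{label:6s} QoC bin: mean CF% = {means[label]:.2f}")
```

With enough observations per bin, the easy bin shows a higher mean CF% than the hard bin even though the effect is invisible in any single player-season.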

On the other hand, in a small sample size, many factors become more important than they would in a full season. To single out quality of competition in a small sample size and exclude the variety of other variables that would become more prominent is a mistake.

A more fundamental question is whether these metrics accurately capture what people mean when they say “quality of competition” or “quality of teammate”. Perhaps there is an opportunity to improve our metrics in this area.

I have identified some areas for future research regarding these metrics:

  • Should we use different metrics for forwards vs. defensemen?
    • Forwards and defensemen are deployed differently. A defensive pair can possibly play with (or face) more than one line of forwards during a single shift. Would that dilute the reliability of the metrics?
    • Defensemen also play with their defense partner far more than with any other teammate. Conversely, forwards play with many more players than defensemen do.
  • Are there patterns worth investigating in home vs. away situations?
    • Does quality of competition manifest itself more obviously when coaches have the last change?
  • Does time on ice have some effect?
    • Do players that simply play more face harder competition in those extra minutes? Or do the extra minutes dilute any effects further?
  • Case studies of in-season changes in the metrics could yield interesting results. For example, when Crosby and Malkin had season-ending injuries in 2010-11, did Kris Letang’s quality of teammate drop precipitously?

The Literature:

http://nhlnumbers.com/2012/7/23/the-importance-of-quality-of-competition

https://hockey-graphs.com/2015/10/08/why-linemate-and-competition-metrics-may-not-be-as-simple-as-we-think/

https://hockey-graphs.com/2015/10/19/the-relationship-between-competition-and-observed-results-is-real-and-its-spectacular/

Follow me on Twitter @Null_HHockey

One thought on “Distribution of Quality of Competition and Teammates Metrics”

  1. I’ll argue that a goals-based QoC is better than either Corsi QoC or TOI QoC, as the sample size issues with goal data largely go away when averaging over hundreds of opponents.

    “Does quality of competition manifest itself more obviously when coaches have the last change?”

    I have looked at this in the past. The answer is a little, but not that much. It also manifests itself more when looking at goals-based QoC (which is what coaches would be line matching on). I just grabbed 5v5 Home/Road data from stats.hockeyanalysis.com for players who played >300 Home or Road minutes last season. The standard deviation of OppGF% at home is 0.83; on the road it is 0.74. Slightly less, but not a huge difference. (For OppCF% the numbers are almost identical: 0.48 vs 0.47.)

    For forwards it is 0.80 and 0.73, while for defensemen it is 0.89 and 0.76, indicating coaches try harder to get matchups with defensemen than with forwards.

    All that said, QoC is probably not worth worrying much about over larger samples.
