Goaltender Performance vs Rest

Photo by Michael Miller, via Wikimedia Commons

I couldn’t find this data anywhere (if it’s out there, please point me to it), so I went back to 1987 and pulled goaltender performance by days of rest between starts. We knew goalies did poorly in the second game of a back-to-back pair, but I’m surprised to see such a large gap persist at two and three days of rest. (The overall dataset is roughly 40,000 games.)

| Days between Games | % of Games | Mins (G1) | Mins (G2) | Shots Vs (G1) | Shots Vs (G2) | Sv% (G1) | Sv% (G2) | W% (G1) | W% (G2) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 9.5 | 54.7 | 55.0 | 28.9 | 29.7 | 0.905 | 0.897 | 0.498 | 0.421 |
| 2 | 35.6 | 57.0 | 56.8 | 28.7 | 28.7 | 0.908 | 0.901 | 0.522 | 0.486 |
| 3 | 19.2 | 57.1 | 56.7 | 29.0 | 29.0 | 0.905 | 0.900 | 0.514 | 0.481 |
| 4 | 12.1 | 56.7 | 56.3 | 29.2 | 28.7 | 0.899 | 0.898 | 0.477 | 0.487 |
| 5 | 7.2 | 55.4 | 55.2 | 29.0 | 28.8 | 0.892 | 0.899 | 0.440 | 0.448 |

There are lots of systematic issues here (e.g. most back-to-back games are on the road) but simplistically, this would mean goalie rest obscures the bulk of a goaltender’s value. That seems implausible and worth looking at in more detail…
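The splits in the table above can be reproduced with a straightforward group-by. This is a minimal sketch, assuming a hypothetical game-log DataFrame with columns `goalie`, `date`, `saves`, `shots_against`, and `win` (not the actual dataset or code used here):

```python
import pandas as pd

def rest_splits(games: pd.DataFrame) -> pd.DataFrame:
    """Summarize goaltender results by days of rest before each start."""
    games = games.sort_values(["goalie", "date"]).copy()
    # Days since each goalie's previous start; the first start has no rest value.
    games["days_rest"] = games.groupby("goalie")["date"].diff().dt.days
    grouped = games.groupby("days_rest")
    return pd.DataFrame({
        "pct_of_games": grouped.size() / len(games) * 100,
        "sv_pct": grouped["saves"].sum() / grouped["shots_against"].sum(),
        "win_pct": grouped["win"].mean(),
    })
```

Each goalie's first start of the sample drops out of the grouping (its rest is undefined), which matches how back-to-back splits are usually counted.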

Schedule Adjustment for Counting Stats

Edit: There is another version of this article available as a PDF, which includes more explicit mathematical formulas and an example worked in gruesome detail.

Rationale

We all know that some games are easier to play than others, and we all make adjustments in our head and in our arguments that make reference to these ideas. Three points out of a possible six on that Californian road-trip are good, considering how good those teams are; putting up 51% possession numbers against Buffalo or Toronto or Ottawa or Colorado just isn’t that impressive considering how those teams normally drive play, or, err, don’t.

These conversations only intensify as the playoffs roll around — really, how good are the Penguins, who put up big numbers in the “obviously” weaker East, compared to Chicago, who are routinely near the top of the “much harder” western conference? How can we compare Pacific teams, of which all save Calgary have respectable possession numbers, with Atlantic teams, who play lots of games against the two weak Ontario teams and the extremely weak Sabres? Continue reading

The Defensive Shell is a good idea in theory. Unfortunately, it doesn’t work.

The results of score effects are pretty basic hockey analytics knowledge at this point. Teams that are down in goals tend to take more shots, while teams that are up tend to take fewer, with the effect growing larger as the game goes on.

We often explain this effect by saying teams go into a “defensive shell,” playing extremely conservatively on offense to avoid giving up easy scoring opportunities, at the cost of spending more time in their own defensive zone. It is, of course, not a one-team effect either: we often emphasize that the trailing team is also taking greater risks to try to score, which is why the shots taken by the team with the lead go in at a higher rate than normal. That said, it’s generally accepted that going into a shell would be a losing strategy over a whole game, which is why teams don’t attempt it for a full game. Continue reading

Draft & Develop: How analytics can be combined with qualitative scouting

[Graph: NHL equivalencies (NHLe)]

The graph above represents how some may look at and use hockey statistics: the better a player performs in a statistic, the more skill he is assumed to have. This practice can be found in league equivalencies (now more commonly known as NHL equivalencies, or NHLe), originally devised here by Gabriel Desjardins.

In truth, almost all of us are guilty of this at one point or another, like when using evidence such as “Player A has a better Corsi%; therefore, he pushes the play better.” Most of us reasonably understand that this is not how it works, but it is not discussed often enough. These tools show average expected outcomes; the output is not the only possible outcome. Continue reading
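The basic NHLe arithmetic is a points-per-game rate scaled by a league translation factor. A minimal sketch, where the factor values are illustrative placeholders, not Desjardins’s published numbers:

```python
# Illustrative league translation factors (placeholders, not published values).
LEAGUE_FACTORS = {"AHL": 0.45, "SHL": 0.60, "OHL": 0.30}

def nhle(points: int, games: int, league: str, nhl_season_len: int = 82) -> float:
    """Project a full-season NHL point total from feeder-league scoring."""
    per_game = points / games
    return per_game * LEAGUE_FACTORS[league] * nhl_season_len
```

The point of the excerpt stands either way: the projection is an average expected outcome, not the only possible one.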

Adjusted Possession Measures

A little while ago I wrote an article at SensStats discussing score effects and suggesting a new formula which we might use to compute score-adjusted Fenwick. This article addresses several interesting questions and new avenues that were suggested to me by various commenters.

  1. The method in the above-linked article simultaneously adjusts for score and for venue (that is, home vs away). It’s interesting to estimate the relative importance of these two factors. As we’ll see, it turns out that adjusting for score effects is dramatically more important than adjusting for venue effects.
  2. We might consider adjusted Corsi instead of adjusted Fenwick; it turns out that adjusted Corsi is a better predictor of future success than adjusted Fenwick at all sample sizes.
  3. Most interestingly, we might consider how score effects vary over time, and see if we can create a score-adjusted possession measure that takes this variation into account. We find here that performing such adjustments is indistinguishable in predictivity from the naive score-adjustments already considered.
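To make the kind of adjustment in point 1 concrete, here is a minimal sketch of one common construction (not necessarily the exact formula from the linked article): weight each unblocked shot attempt by the reciprocal of the league-average share of attempts the home team takes in that score state. The share values below are illustrative placeholders.

```python
# Illustrative league-average home Fenwick% by home goal differential,
# clamped to [-1, +1]; leading teams take a smaller share of attempts.
HOME_SHARE = {-1: 0.55, 0: 0.505, 1: 0.46}

def adjusted_fenwick_for(events):
    """events: iterable of (is_home_attempt, home_score_diff) tuples.
    Returns (adjusted attempts for, adjusted attempts against) for the home team."""
    f_for = f_against = 0.0
    for is_home, diff in events:
        diff = max(-1, min(1, diff))
        share = HOME_SHARE[diff]
        if is_home:
            f_for += 0.5 / share        # up-weight attempts taken in suppressed states
        else:
            f_against += 0.5 / (1 - share)
    return f_for, f_against
```

Because the weights bake in both the score state and the home/away split, this single pass adjusts for both factors at once, which is what makes their relative importance worth separating out.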

Several people have pointed out that score effects have a strong time-dependence. At least as far back as 2011, Gabriel Desjardins (@behindthenet) noted the effect, and readers with keener memories than mine will no doubt remember still earlier examples. Just last week, Fangda Li (@fangdali1) wrote an article arguing that score effects play virtually no role outside of the third period. This article will show that, while score effects are magnified as the game wears on, time-adjustment for possession calculations is not justified. Continue reading

The State of Save Percentage

Image from Wikimedia Commons

Currently, save percentage is the single best statistic for evaluating goaltenders… which is unfortunate, as save percentage is an extremely rudimentary and suboptimal statistic.

There are two factors that make a statistic useful: it must impact winning, and the individual must be able to control or at least influence it. Save percentage has both. Continue reading
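For reference, raw save percentage is nothing more than saves divided by shots on goal, which is exactly why it is called rudimentary: it treats every shot as equally difficult. A trivial sketch:

```python
def save_percentage(saves: int, shots_against: int) -> float:
    """Raw save percentage: the share of shots on goal the goalie stops.
    Ignores shot quality, location, and game state entirely."""
    return saves / shots_against
```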

Bayes-Adjusted Fenwick Close Numbers: Week 4

Mikko Koivu may really be the captain of a dominant possession team.

Two more weeks have passed since we last updated our Bayes-Adjusted Fenwick Close (BAFC) numbers. This means we now have a lot more data, and our BAFC standings are starting to be meaningfully affected by this year’s results: significant changes have happened in the last two weeks, and this season’s results are starting to look real. Continue reading

Friday Quick Graph: Does puck possession affect penalty differentials?

[Graph: penalty differential vs. Corsi differential, 5v5 score-tied]

Using data from War-On-Ice.com, I grabbed the penalty and Corsi differentials for all teams in 5v5 score-tied minutes. The whole point was to look at whether possession plays a role in a team’s penalty differential.

Above we see a weak but real relationship, with about 6.7% of the variance in penalty differentials explained by possession.

From the regression line, we estimate that the average difference between a top and a bottom possession team is about 11 penalties drawn per season in 5v5 score-tied minutes. Of course, there are also opportunities to draw penalties at other team strengths and in other score situations. (The top/bottom comparison uses the 40-60 rule.)
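A fit like the one described can be sketched with ordinary least squares; the arrays here would be per-team Corsi% and penalty differential in 5v5 score-tied minutes (hypothetical inputs, not the War-On-Ice data itself):

```python
import numpy as np

def fit_penalty_vs_possession(corsi_pct, penalty_diff):
    """OLS fit of penalty differential on possession.
    Returns (slope, intercept, r_squared)."""
    x = np.asarray(corsi_pct, dtype=float)
    y = np.asarray(penalty_diff, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

With the fitted slope in hand, the top-vs-bottom estimate is just the slope times the possession gap (e.g. 20 percentage points under the 40-60 rule).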

Re-examining Fenwick and Playoff Success

Pavel Datsyuk

Pavel Datsyuk and the best Fenwick team in recent history lifted the Cup in 2008

Image from Dan4th Nicholas via Wikimedia Commons

Back in April of 2013, Chris Boyle presented his study of the relationship between a team’s Fenwick percentage in close-score situations and its eventual success in the Stanley Cup playoffs. Since then, two Stanley Cup playoffs have been played, and the previous 2007-08 starting point for shot-attempt data was extended two years backwards thanks to War on Ice. All told, that’s another four seasons of data added to the five Boyle examined.

Worth another look, in my opinion.

Continue reading

Bayes-Adjusted Fenwick Close Numbers: Week 2

John Scott may have given the Sharks a needed goal scorer, but their possession numbers are falling fast.

Another week, another week of possession data in the NHL. While last week several teams had played only 3 games, this week the minimum number of games played is 6, so the minimum sample size for our Bayes-Adjusted Fenwick Close (BAFC) numbers is now 6 games of data. And with more data, this year’s numbers are starting to have a greater impact on the possession rankings.

In case you missed our introduction to BAFC last week: BAFC simply takes last year’s possession numbers and combines them with this year’s, giving last year’s numbers less weight as more games are played this year, to come up with a more predictive estimate of each team’s true-talent Fenwick close. It’s far from perfect (indeed, the weighting formula is admittedly arbitrary), but it paints a picture that is not prone to overreacting to small samples. Continue reading
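The blending idea behind BAFC can be sketched as a games-weighted average. This is a minimal illustration of the concept, not the authors’ actual formula; in particular, the 20-pseudo-game prior weight is an arbitrary placeholder:

```python
def bafc(last_season_fc: float, this_season_fc: float, games_played: int,
         prior_games: float = 20.0) -> float:
    """Blend last season's and this season's Fenwick close percentages,
    shifting weight toward the current season as games accumulate."""
    w = games_played / (games_played + prior_games)
    return w * this_season_fc + (1 - w) * last_season_fc
```

With zero games played the estimate is entirely last season’s number, and the current season gradually takes over as the sample grows, which is exactly the "not prone to overreacting to small samples" behavior described above.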