Bayes-Adjusted Fenwick Close Numbers – An Introduction

With the season upon us, and multiple stat sites now hosting team and player fancystats, it is pretty tempting for a hockey fan (well, one who's into fancystats) to try to check how their team is doing in possession in close situations – in other words, in Fenwick Close (or, alternatively, score-adjusted Fenwick). The problem with this, of course, is that the sample sizes are currently so small as to make the #s pretty meaningless – some teams have played as few as 3 games, so you can't make any judgments based upon these numbers on their own.

But, as I mentioned on Twitter, we can still try to take these numbers and make something out of them, using our prior knowledge of the NHL to make judgments. For example, I can look at current Fenwick Close #s and pretty confidently state "Buffalo is going to be a terrible, terrible team" at this point, despite the sample size, given our prior knowledge of what the Sabres are. In other words, we can incorporate current Fenwick Close #s into a Bayesian analysis.


This concept is actually used in a few other sports analyses when doing rankings or evaluations. In college basketball, college football, and the NFL, analytical rankings tend to start the season by adding the current season's #s to those of the PREVIOUS SEASON (adjusted for changes in team composition between seasons), with the weight placed on the previous season's results starting out very strong and getting weaker as the season goes on. These sports do this because they have very few games – the NFL has 16 games, college football has 12 regular-season games, and college bball only has 30, spread out over a bunch of months.

There's no reason we can't do the same for the NHL, using last season's Fenwick Close #s as our prior. Obviously, like in those other sports, this isn't perfect – we might want slightly different priors for some teams due to changes in team personnel – but it gives us a pretty decent starting point for an objectively-based prior for each team.

Now, how do we weight last season's results compared to this season's? This is where I start making some arbitrary-ish decisions, and thus the below #s are in no way scientific. But essentially what I did was take last year's fen close for each team and weight it to start as if it were 25 games' worth of data – figuring that at 25 games, we'd be confident enough to use this year's data on its own without worrying too much about prior seasons. Then, for each game a team has played this year, we reduce the weight of the prior season by one game.

In other words, as of this moment the Bruins have played 6 games, so we'd weight their #s from 2013-2014 as if the 2013-2014 season were instead 19 games' worth of data from THIS year; our Bruins sample thus contains 6 games of 2014-15 data and 19 games of 2013-14 data. This obviously winds up with most of our results being pretty damn close to last year's (for the Bruins, who've played the most games of any team so far, last year's data is being weighted more than 3x this year's), but that shouldn't be a surprise given how weak the signal from this year's data is compared to our prior.
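To make the arithmetic concrete, here's a quick sketch of the weighting I just described (the function and the example numbers are purely illustrative, not anything pulled from the actual team data):

```python
def bafc(prior_fc, current_fc, games_played, prior_games=25):
    """Bayes-Adjusted Fenwick Close: blend last season's Fenwick Close %
    with this season's, treating last season as (prior_games - games_played)
    games' worth of data from this year."""
    # The prior's weight drops by one game for every game played this season,
    # and bottoms out at zero once a team passes prior_games.
    prior_weight = max(prior_games - games_played, 0)
    total_weight = prior_weight + games_played
    return (prior_fc * prior_weight + current_fc * games_played) / total_weight

# Example: a team that was 55.0% last year and is at 50.0% through 6 games
# this year projects to (55.0 * 19 + 50.0 * 6) / 25 = 53.8%.
print(bafc(55.0, 50.0, 6))  # 53.8
```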

If I were trying to be more scientific, I'd weight by total Fenwick events rather than games played, but given that we tend to think of things in terms of games of data, and that this weighting is somewhat arbitrary already, I'm going with games instead. (A sketch of that event-weighted version is below, for anyone who wants it.)
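The event-weighted version would only change what we count; a rough sketch, with the events-per-game figure left as an input rather than me guessing a number:

```python
def bafc_by_events(prior_fc, current_fc, current_events, events_per_game,
                   prior_games=25):
    """Same blend as above, but weighted by Fenwick (close) events rather than
    games, with the prior worth prior_games * events_per_game events."""
    # The prior's event weight shrinks by one event for every event observed
    # this season, and bottoms out at zero.
    prior_events = max(prior_games * events_per_game - current_events, 0)
    total_events = prior_events + current_events
    return (prior_fc * prior_events + current_fc * current_events) / total_events
```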

Anyhow, without further ado, the results of this analysis are below. The rightmost column is the fen close we'd project with this Bayesian analysis (I've labeled this as Bayes-Adjusted Fen Close, or BAFC):
[Chart: Bayes-Adjusted Fenwick Close (BAFC) by team]

As you can see, the standings mainly line up with what you had last year, but this year's data still causes a few decent moves. The Sharks drop from an elite possession team (54.93%) to a simply very good one (52.87%). Meanwhile the Bolts and Isles, amongst other teams, take steps up. And the Sabres move into historically bad territory – only one team has ever been sub-40%, and that's where this analysis would have them (a more thorough Bayesian analysis, knowing how historically unlikely this is, would probably move them up a few ticks).

I think I'll be updating this either weekly or every two weeks – it may not be a scientific analysis, but I suspect it does a better job indicating the present state of teams than either using just last year's data or using just 4-6 games of this year's data.
