Predicting Which Players Will Succeed on the Powerplay


Alexander Semin did not have a good season last year. After producing decent numbers in his first two seasons in Carolina, with 35 goals and 51 assists in 109 games, Semin struggled in 2014-2015, putting up only 19 points over 57 games and seeing his shooting percentage drop below 10% for only the second time in his 10-year career. With three years remaining on a contract paying $7MM per season, the Hurricanes decided to cut their losses, buying out the Russian winger prior to the start of the UFA period in July.

While at first glance Semin’s release seems like a reasonable response to a former top scorer who appeared to have lost the magic touch, if we look a little closer at Semin’s numbers a different story begins to emerge. Semin logged only 1.5 minutes of powerplay time per game in 2014-2015, down more than 2 minutes from his 2013-2014 total, and well below the 4+ minutes he saw at the start of his career in Washington. While other factors certainly played a role in his fall from grace (a 97.5 PDO at 5-on-5 doesn’t help), there’s no denying that the coaching staff’s decision to keep Semin off the ice when the ‘Canes were up a man cost him (and likely the team) points.

Although Semin is an extreme case, the general story of a player losing points as his powerplay time decreases is not uncommon amongst NHLers, and illustrates that opportunity often matters just as much as ability when it comes to a player’s results. Each team’s powerplay minutes are limited, and valuable to both the team and player, given the higher scoring environment that exists when a team is up a skater. Overall, teams scored roughly 25% of their goals on the powerplay last year, despite the fact that less than 20% of total ice time was played with a team on the man-advantage.
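
To make that comparison concrete, here’s a quick back-of-the-envelope sketch of the gap between powerplay goal share and powerplay ice time share. The totals below are hypothetical, chosen only to illustrate the arithmetic; the real figures would come from league play-by-play data.

```python
# Back-of-the-envelope comparison of powerplay goal share vs. powerplay time share.
# All totals are hypothetical, picked only to illustrate the gap described above.
pp_goals, total_goals = 1700, 6800          # hypothetical league-wide goal totals
pp_minutes, total_minutes = 22000, 120000   # hypothetical league-wide team-minutes

print(f"Share of goals scored on the PP:   {pp_goals / total_goals:.1%}")       # ~25%
print(f"Share of ice time spent on the PP: {pp_minutes / total_minutes:.1%}")   # ~18%
```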

Continue reading

Sbisa, the Sens, and the Scramble: Evaluating Defensive Play Following a Shot Attempt


Luca Sbisa may be one of the players who best epitomizes the divide between the old-school, eye-test view of hockey and the statistics-focussed analysts offering their opinions from their mother’s basements on fan-curated sites across the internet. While GM Jim Benning clearly thinks Sbisa is a useful defender, rewarding him with a 3-year, $10.8MM deal and consistently praising his defensive zone smarts, Canucks fans have been less bullish on the talents of the 25-year-old Swiss pointman. Correctly noting his less than stellar possession numbers, J.D. Burke commented that his first season with Vancouver featured few “extended stretches in which any pairing with Sbisa on it looked passable”. These aren’t just the criticisms of a bitter fan longing for better years: Burke backed up his arguments with a detailed numerical breakdown of Sbisa’s many failings, and with video evidence of some of his less than professional defending from 2014-2015. Burke, and the Canucks’ fanbase in general, seemed to paint a picture of Sbisa that stood in stark contrast to what Vancouver management observed. Where the fans saw a player who frequently found himself out of position at critical junctures when defending his own end, Vancouver’s brain trust viewed Sbisa as the ideal player to disrupt a cycle down low. How could two groups of people who watched the same games with such intense devotion come to such different conclusions?

One of the biggest difficulties with evaluating Sbisa, and defencemen in general, is that what the eye test says is important is often wildly out of sync with what statistics can currently measure. While stats-based analyses focus on a defender’s ability to prevent shot attempts (in other words, their Corsi Against per 60), most of the praise for defensively-minded defencemen tends to focus on hockey IQ, being in the right position, and winning battles in the corner. While ideally these less “quantifiable” skills should lead to favourable statistical results, issues with differences in player deployment and the teammate-dependent nature of defending often mean that what gets praised in post-game interviews isn’t what shows up on the scoresheets, leaving a divide between management’s view and the story told by pure shot attempt numbers.
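
For readers unfamiliar with the metric, Corsi Against per 60 is simply a rate: shot attempts allowed while the player is on the ice, scaled to 60 minutes of play. A minimal sketch, with hypothetical inputs:

```python
def corsi_against_per_60(attempts_against: int, toi_minutes: float) -> float:
    """Shot attempts (goals + shots + misses + blocks) allowed per 60 minutes of ice time."""
    if toi_minutes <= 0:
        raise ValueError("time on ice must be positive")
    return attempts_against / toi_minutes * 60

# Hypothetical season: 750 attempts against over 980 minutes of 5-on-5 ice time.
print(round(corsi_against_per_60(750, 980), 1))  # ~45.9 CA/60
```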

Continue reading

The relationship between competition and observed results is real and it’s spectacular

[Chart: Raw Comp Impact]

Abstract

There has been much work over the years looking at the impact of competition on player performance in the NHL. Prompted by Garret Hohl’s recent look at the topic, I wanted to look a little deeper at the obvious linear relationship between Quality of Competition and observed performance.

The result is a mathematical relationship between competition and observed results, which could provide insight into player performance over short time frames. In the long run, the conclusions drawn by Eric Tulsky still hold: the effects of facing normally distributed Quality of Competition (QoC) wash out over time. But this should not preclude consideration of, and even adjustment for, QoC when looking at smaller sample sizes.
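
To see why normally distributed QoC washes out, here is a rough simulation sketch. Every parameter is an assumption chosen for illustration: the spread in average competition faced shrinks roughly with the square root of games played, so QoC matters far more over a 10-game stretch than over multiple seasons.

```python
# Rough illustration of the "washing out" argument: if per-game opponent quality
# is drawn from a normal distribution, the spread in average QoC faced shrinks
# as the sample grows. All parameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)
league_mean, league_sd = 50.0, 2.0   # hypothetical QoC scale (e.g., opponent CF%)

for games in (10, 25, 82, 246):      # from a short stretch to three full seasons
    # average QoC faced by 1,000 simulated players over `games` games each
    avg_qoc = rng.normal(league_mean, league_sd, size=(1000, games)).mean(axis=1)
    print(f"{games:>3} games: spread (SD) in average QoC faced = {avg_qoc.std():.2f}")
```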

Continue reading

A New Way To Measure Deployment – Expected Faceoff Goal Differential

Zone starts are not that great of a metric. Although certain players do tend to be put out almost exclusively for offensive or defensive purposes, the reality is that for most players, zone starts have a relatively small effect on performance. And yet, many hockey writers still frequently qualify a player’s performance based on observations like “they played sheltered minutes” or “they take the tough draws in the defensive zone”. Part of the problem is that we’ve never really developed a good way of quantifying a player’s deployment. With many current metrics, such as both traditional and true zone starts, it’s difficult to express their effect except in a relative sense (i.e. by comparing zone starts between players). So when a pundit says that a player had 48% of his on-ice faceoffs in the offensive zone, it’s difficult to communicate to most people what that really means.

Going beyond that, even if we know that 48% would make a player one of the most sheltered skaters in the league, the question we should ask is: so what? Simply knowing that a player played tough minutes doesn’t give us any information we can use to adjust that player’s observed results, which is really the reason we care about zone starts. We know that if you start your shifts predominantly in the defensive zone, you’ll likely see worse results, but zone start percentages don’t tell us how much worse they should be. Traditional deployment metrics are too blunt a tool – they provide a measurement, but not one that gives any context to the performance numbers we really care about.
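
To give a sense of what a deployment metric expressed in goals might look like, here is a hypothetical sketch: weight each on-ice faceoff by the expected goal differential that typically follows a draw in that zone, then sum. The per-faceoff values below are invented for illustration, not the ones used in the post.

```python
# Hypothetical sketch of an expected faceoff goal differential: each on-ice faceoff
# is weighted by the net goals a team typically gains (or gives up) following a draw
# in that zone. The per-faceoff values are made up for illustration only.
EXPECTED_GD_PER_FACEOFF = {
    "offensive": +0.010,   # hypothetical net goals gained per offensive-zone draw
    "neutral":    0.000,
    "defensive": -0.010,   # hypothetical net goals conceded per defensive-zone draw
}

def expected_faceoff_goal_differential(faceoffs_by_zone: dict) -> float:
    """Sum the expected goal impact of a player's on-ice faceoffs by zone."""
    return sum(EXPECTED_GD_PER_FACEOFF[zone] * count
               for zone, count in faceoffs_by_zone.items())

# Hypothetical sheltered deployment: 300 OZ, 250 NZ, 200 DZ faceoffs.
print(expected_faceoff_goal_differential(
    {"offensive": 300, "neutral": 250, "defensive": 200}))  # +1.0 expected goals
```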

Continue reading

Why teams should use 4 forwards on the powerplay

A few days ago, James Mirtle of the Globe and Mail brought up one of the first significant shifts in tactics under the Mike Babcock regime in Toronto.

While the change may be surprising to some fans, particularly given the lack of depth in the Leafs forward corps, it shouldn’t be altogether unexpected.

Continue reading

Prospect Cohort Success – Evaluation of Results

“2008 NHL Entry Draft Stage” by Alexander Laney. Licensed under CC BY-SA 3.0 via Commons.

Identifying future NHLers is critical to building a successful NHL team. However, with a global talent pool that spans dozens of leagues worldwide, drafting is also one of the most challenging aspects of managing an NHL team. In the past, teams have relied heavily on their scouts, hoping to eke out a competitive advantage by employing those who can see what other scouts miss – quite a challenge for scouts who may only be able to watch a prospect a handful of times in a season. While there has been some progress in the past few years with teams incorporating data into their overall decision making, from the outside, the incorporation of data-driven decision making in prospect evaluation appears to have been minimal.

To address this, Josh Weissbock and I have developed a tool for evaluating prospect potential which we call Prospect Cohort Success (PCS), with the help of others in the analytics community, including Hockey Graphs Supreme Leader Garret Hohl.

Continue reading

Rate Metrics Matter

The other day, @Moneypuck_ and @SteveBurtch had a conversation about the Prospect Cohort Success Model:

While the PCS model is interesting in its own right, I found the discussion about the methods we use to analyze players just as worthwhile.

Continue reading

Why Possession and Zone Entries Matter: Two Quick Charts

As some of you know, the NHL tracked offensive zone time for two seasons, 2000-01 and 2001-02, then inexplicably stopped. As some of you also know, I have a lot of historical game data, and that includes all the zone time from these seasons. Taking those performances, and focusing on the first two periods to avoid any major score effects (or “protecting the lead“), I charted every single game alongside 2pS%, the historical possession metric.

It’s pretty clear that the spread in shots-for in these games was quite a bit greater than the spread in zone times. Curious, I decided to do a distribution plot, the one that you see leading this piece (2pS% and offensive zone time % on the x-axis, percentage of total performances on the y-axis). Zone time, or generally speaking the flow of the game, has a tighter, much more normal distribution than the distribution of shots. What does this mean? It means that things like how you enter the zone (zone entries) and how you control the puck in the zone (possession, or passing) can make a pretty big difference in how you generate scoring opportunities.

Note: The data I used for these quick graphs were from the home team’s perspective, which is why the distributions are centred a bit north of 50%. Keeping that in mind, the 60-40 Rule we established here a year ago looks pretty good for assessing game flow, but there are ways within that flow to tip the scale.
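
For anyone who wants to reproduce this kind of comparison, a minimal sketch is below. It assumes a per-game table with first-two-period shot and offensive zone time columns; the file and column names are placeholders, not my actual dataset.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-game file; file and column names are placeholders.
games = pd.read_csv("zone_time_2000-02.csv")

# 2pS%: the home team's share of first-two-period shots.
games["two_ps_pct"] = 100 * games["home_2p_shots"] / (
    games["home_2p_shots"] + games["away_2p_shots"])
# Home team's share of offensive zone time over the first two periods.
games["oz_time_pct"] = 100 * games["home_oz_seconds"] / (
    games["home_oz_seconds"] + games["away_oz_seconds"])

# Overlaid histograms: zone time share should be visibly tighter than shot share.
games[["two_ps_pct", "oz_time_pct"]].plot(kind="hist", bins=30, alpha=0.5)
plt.xlabel("Home-team share (%)")
plt.show()
```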

Sunday Quick Graph: Distribution of EA NHL Player Overall Ratings, from NHLPA Hockey ’93 to NHL 05

Out of curiosity, and having access to some of the data, I decided I could chart the distribution of player overall ratings in the EA NHL series in its first decade of existence (the first of the series and NHL 99 being the exceptions). Knowing full well that, by 2005, there was a popular gripe that “anybody could get a 70 overall rating,” it seemed like it would be fun to see how we arrived at that point. As you can see, the ’93 version was remarkable in its near-even distribution; most famously, Tampa Bay Lightning defenseman Shawn Chambers received an overall rating of 1. The subsequent games never attempted a similar approach; there were marked divergences for the ’96 and ’04 versions, the latter essentially bringing us to the place where it seems anyone can get a 70 rating. I’d be interested to hear your comments suggesting theories and/or evidence for why we saw this kind of movement.
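
If you’re curious what that comparison looks like in code, a rough sketch is below. It assumes a long-format table of (game, player, overall) ratings; the file and column names are placeholders, not the dataset used here.

```python
import pandas as pd

# Hypothetical long-format ratings table; file and column names are placeholders.
ratings = pd.read_csv("ea_nhl_ratings.csv")  # columns: game, player, overall

# Per-game summary of how generous the overall ratings were.
summary = ratings.groupby("game")["overall"].agg(
    mean="mean",
    share_70_plus=lambda s: (s >= 70).mean(),
    share_below_50=lambda s: (s < 50).mean(),
)
print(summary.round(2))
```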

At this point I’m inclined to say that, since this was an NHLPA-approved product, it probably wasn’t enjoyable for the players to have low ratings, and thus have that opinion of them reflected to thousands of young fans. More importantly, those fans probably didn’t get much of a kick out of playing with poorer players (playing against them, on the other hand…). I’d also guess that, when you’re rating a player’s numerous attributes, it’s hard to end up with a 1 overall unless you have negative values (which they didn’t) or very low weightings for multiple attributes (which they mostly didn’t).

Why would I even bother looking at this anyway? Well, for two reasons. One, after boxcar statistics (goals, assists, points) and +/-, video game ratings were really the next attempt to derive a publicly-consumed statistic for player talent and value. Whole generations observed, and potentially internalized, the way these games conceptualized important and unimportant elements of the game. Understanding hockey should be as much an understanding of society as it is an understanding of the technical components of the game.

Postscript: I plan on breaking down this data in a more complex fashion in future posts, so stay tuned…

Postscript II: Best theory I’ve seen so far, from Reddit user “DavidPuddy666” — that the inclusion of the CHL and other leagues raised the bar. For the most part, though, I recall the international rosters and European leagues following these distributions. In other words, you didn’t have a bunch of sub-50 overalls buried on international rosters. The European leagues were even worse for this; top players in Euro leagues are still rated as if they would be top NHL players. As for the CHL leagues and the AHL, Puddy might have a point — but the AHL didn’t appear until NHL 08, and the CHL leagues until NHL 11. In fact, the international teams theory has the same chronological issue, as only the best international teams first appear in NHL 97, before an additional 16 international teams are added for NHL 98.

How Did Bucci Do? Revisiting John Buccigross’s Alex Ovechkin Goals Projection

Photo by “Photonerd23” via Wikimedia Commons

In February of the 2009-10 season, John Buccigross of ESPN was spurred by a mailbag question to do a quick thought experiment: could Alex Ovechkin set the all-time goals mark? Gabe Desjardins voiced skepticism of Bucci’s optimistic projection but didn’t offer a counter-projection, presumably because, as he wrote:

Basically, careers are incredibly unpredictable – nobody plays 82 games a year from age 20 to age 40. And players who play at a very high level at a young age tend to not sustain that level of play until they’re 40…So, to answer the reader’s question: I believe that there is presently no significant likelihood that Alex Ovechkin finishes his career with 894 goals. He needs to display an uncommon level of durability for the next decade, and not just lead the league in goal-scoring, but do so by such a wide margin that he scores as much as Gretzky, Hull or Lemieux did in an era with vastly higher offensive levels.

That said, I thought it would be fun, with five full years gone, to see how Bucci did, and try to build a prediction model with the same data he had available.

Continue reading
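
As a taste of what such a projection involves, here is a toy extrapolation using only information available in early 2010. The starting total, scoring rate, and decline factor are all rough assumptions for illustration, not the model built in the post.

```python
# Toy career-goals projection for Ovechkin as of February 2010.
# Every number below is an assumption chosen for illustration only.
GOALS_TO_DATE = 260          # approximate career total entering February 2010
PEAK_GOALS_PER_SEASON = 50   # assumed full-season scoring rate in his prime
CURRENT_AGE, RETIRE_AGE = 24, 40

def project_career_goals() -> int:
    total = GOALS_TO_DATE
    for age in range(CURRENT_AGE + 1, RETIRE_AGE + 1):
        # Assume scoring erodes ~4% per season after age 27 and he never misses games.
        decline = max(0.0, 1.0 - 0.04 * max(0, age - 27))
        total += PEAK_GOALS_PER_SEASON * decline
    return round(total)

print(project_career_goals())  # compare with Gretzky's 894
```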