I’ve had a couple of people ask about how to use the new interactive visualizations we offer at Hockey Graphs, so I thought I’d take the time to provide a tutorial with some visual demonstrations.
I’ve been asked by a couple of people how a team with a normal PDO and strong underlying metrics could have missed the playoffs entirely. It’s an important question to address, particularly because making the playoffs is so much more important than worrying about whether you’re lucky enough to win the Stanley Cup. I composed an email response and felt good enough about it to share it here. While this doesn’t comprise the whole of the explanation (certainly, some “blame” goes to Calgary and Winnipeg), these are points I’m not seeing made elsewhere.
A couple of things really hurt the Kings. One is a cruel fact of a low-scoring league: if more games are going to be decided by one or two goals, it increases the likelihood that a fluky goal can impact a team in the standings. The Kings had the most overtime losses in the Western Conference; last year they were tied for the second fewest in the West. The second thing is the tank battle…the West had two teams with historically bad records – add in games against Buffalo, and we have three teams that will end the season with point totals that were typically reserved for the sole worst team in the league in other seasons. On the flip side, that creates a rising tide for all the other ships in the league, and raises the bar for getting into the playoffs. I mean, needing nearly 100 points to get in? Last year, the last playoff team in the West, Dallas, had 91 points. A nearly identical record to this year’s got Los Angeles into the playoffs as the 8th seed in 2011-12.
Maybe the closest comparable circumstance was 2010-11, when the West again had two sad-sack teams (Colorado, Edmonton), and the East was noticeably weaker than the West. It took Chicago 97 points to get in. Also, look at 2006-07…Colorado didn’t make it with 95 points, having gone 44-31-7 during the season. If the West is considerably stronger than the East, as it was back then, you could also end up with a tougher path to making the playoffs. In ’06-07, every team in the Western Conference, save the 8th seed (Calgary, with 96 points), had 104 points or more!
Anyway, this year’s league created a scenario where a good team, by any measure, might not get in. The Kings went 39-27-15, outscored their opponents by 12 goals (in fact, they tied for 2nd in the league in goal differential at even strength), and could finish with 95 points and not make the playoffs. In the loser-point era, there have been only two seasons in which that was even possible, and both occurred in the stronger Western Conference. It’s a successful season by anything except the fluid marker of the playoffs, which unfortunately for them is all-important to reach.
Hope this helps,
Note: One critique I’d like to address – yes, all teams in the league are theoretically dealing with the tank battle, but tanking doesn’t occur across the entire season, which means that teams that have already played most or all of their games against tanking teams earlier in the year won’t have the benefit. Additionally, those same teams might have the resulting, added pressure of a more-difficult set of opponents through the latter portion of the season. If the difference between making the playoffs versus not is a matter of a few points, the difference in scheduling can become all the difference in the world.
Right out of the gate, I knew two things: 1) I wanted to take TOI% data from close scores and subtract it from TOI% data from 2+ goal leads, and 2) it would automatically tell us that perceived poorer players are given more playing time with the lead. Why? Because they tend to play less when the score is close, which increases the likelihood that a differential with 2+ goal time on ice will show they get to play more with a big lead. That said, I wanted to run a quick study to see just how large that time swing could be, and which players come out of the woodwork on either end.
But first, I want to whittle away the small sample players, and to do that I’m going to run a quick test to see at what # of games played this TOI 2+ minus TOI Close differential (let’s call it “TOI Lead Diff”) stabilizes.
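The arithmetic behind “TOI Lead Diff” is simple enough to sketch. Below is a minimal illustration in Python; the player names and TOI% figures are invented for demonstration, not taken from the actual study.

```python
# Minimal sketch of "TOI Lead Diff": TOI% with a 2+ goal lead minus
# TOI% in close-score situations. All names and numbers are hypothetical.

def toi_lead_diff(toi_pct_lead_2plus, toi_pct_close):
    """Positive values mean a player sees relatively more ice time
    when protecting a big lead than when the score is close."""
    return toi_pct_lead_2plus - toi_pct_close

# Hypothetical players: a sheltered scorer vs. a defensive specialist.
players = {
    "Scoring Winger":       {"lead": 14.0, "close": 18.5},
    "Defensive Specialist": {"lead": 19.0, "close": 15.5},
}

for name, p in players.items():
    diff = toi_lead_diff(p["lead"], p["close"])
    print(f"{name}: TOI Lead Diff = {diff:+.1f}%")
```

As expected, the defensive specialist shows a positive differential and the sheltered scorer a negative one, which is exactly the built-in bias described above.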
What do you do when a 6’4″ QMJHL forward who scored 184 points in 66 games in his last underage season scores at a 282-point pace in his draft year? You tank — you tank as hard as you can. In the latter half of the 1983-84 season, the Pittsburgh Penguins and New Jersey Devils were in an unspoken, pitched battle for the bottom of the league and everybody knew it. While the Penguins would ultimately win out, sputtering to a 16-58-6 record (“good” for 38 points in the standings) to New Jersey’s 17-56-7 (41 points), the two teams were coming from distinctly different franchise backgrounds.
Using information from our new interactive charts, we can see what set these teams apart, and led them to take different paths in what turned out to be a pretty wild race to the cellar of the NHL.
While tanking is a hot topic in this year’s NHL, the act of tanking is as old as the idea of granting the worst teams a shot at the #1 pick in the draft. Case in point: the 1983-84 Pittsburgh Penguins, routinely considered the most overt tankers in NHL history. The graph above is just one example of their tank, and man is it bad. The yellow and grey lines indicate one standard deviation above and below league-average historical possession (using 2-Period Shot Percentage, or 2pS%, explained here). The blue line is a 20-game moving average (the orange is cumulative), and you’re seeing that right: a team close to the middle of the pack dropped nearly two standard deviations, or from near the top to near the bottom of the league. That graph, and all the ones below, are just some examples of the kind of tinkering you can do with our new interactive graphs, which I highly recommend you check out.
This is part-opportunity to finally explore this question, and part-opportunity to tout some existing and upcoming data visualizations for HG. Travis Yost has been following the absolutely terrible Sabres season all year, and has raised some questions about whether it’s an all-time worst team. He’s only been able to reach back to the admittedly bad early 2000s Atlanta Thrashers, but the historically bad team by which all others need to be measured is the 1974-75 Washington Capitals squad. Using an historical metric like 2pS%, or a team’s share of all on-ice shots-for in the first 2 periods (expressed as a percentage), we can bring the 2014-15 Sabres together with the 74-75 Caps to see where both teams stand. Note: I used the cumulative version of the measure below, and added lines for one standard deviation below league-average in both seasons.
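For readers who want the mechanics, here is a minimal sketch of the 2pS% calculation in Python. The shot totals are hypothetical, not the Sabres’ or Capitals’ actual numbers.

```python
# Sketch of 2pS% (2-Period Shot Percentage): a team's share of all
# on-ice shots during the first two periods, expressed as a percentage.
# Shot totals below are invented for illustration.

def two_period_shot_pct(shots_for, shots_against):
    """2pS% = first-two-period shots for / (for + against), as a %."""
    return 100.0 * shots_for / (shots_for + shots_against)

# Hypothetical game log: (shots for, shots against) in periods 1-2.
games = [(18, 22), (15, 25), (20, 20), (12, 28)]

# The cumulative version pools all games before taking the percentage.
cumulative_for = sum(sf for sf, _ in games)
cumulative_against = sum(sa for _, sa in games)
print(f"Cumulative 2pS%: {two_period_shot_pct(cumulative_for, cumulative_against):.1f}%")
```

A team around 50% is breaking even in shots; the historically bad teams discussed here sit well below the one-standard-deviation line under league average.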
For as bad as Buffalo has been, they haven’t quite matched the futility of the 74-75 Capitals…nor should they. The Capitals were an expansion team that year, and unlike in other years the NHL did not really reach out to ensure the expansion teams in 1974-75 were given a good base to build from. These were also the peak years of the World Hockey Association, which made professional level talent even more diffuse than normal. The other expansion team in 74-75, the Kansas City Scouts, lasted two years before moving to Colorado to become the Rockies (the team subsequently moved to New Jersey in 1982-83 and changed their name to the Devils).
I included the standard deviations for the leagues in 1974-75 and 2013-14 (I haven’t compiled the data for 2014-15 yet, but this should be close enough), and even by those markers the Capitals compared markedly worse to their league than did the Sabres. But once again, the Capitals had a reasonable excuse, while the Sabres have walked into this situation with eyes wide open.
For those interested, I also put together 2-period shots-for and shot-against rates (and stretched them out to per 60 minutes) to get a rough sense of offense-versus-defense for both teams.
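The per-60 stretch is just a scaling step. A minimal sketch in Python, with an invented shot count (two periods equal 40 minutes of regulation play):

```python
# Sketch of stretching a raw 2-period shot count to a per-60-minute
# rate, as described above. The shot count is hypothetical.

def per_60(count, minutes):
    """Scale a raw count to a per-60-minutes rate."""
    return count * 60.0 / minutes

# Hypothetical: 18 shots for across the first two periods (40 minutes).
print(f"Shots for per 60: {per_60(18, 40):.1f}")
```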
I added a couple extra filters to the charts, league-averages and standard deviations as well as 20-game moving averages in all the measures I used, which you can select by clicking on the grey “Team” bars and clicking on “Filter.”
Sort of a mid-week quick graph…I’ve been compiling data for a different project and curiosity got the best of me to see what the spread in team shooting percentages has been across NHL history. We all know that shooting percentage in the NHL went up substantially during the 1980s, but what you’re seeing above is one of the reasons why we theorize that shot quality and team shooting talent might have figured more greatly in outcomes in the 1980s than they do today. With some exceptions, the standard deviation seems to have settled from about 1996-97 to the present at just under 1%, which suggests our expectations from one year to the next should only allow a team that much of a bump above or below league average. It’s worth noting that sample size will affect this measure, which is why our line is so spiky during the Original Six era, and why 1994-95 and 2012-13 might not have been as characteristic of the trend. Incidentally, this is shooting percentage for all situations.
Note: As mentioned by a reader, increased scoring will work together with this standard deviation to accentuate the differences between teams. League-wide, shooting percentage and its standard deviation move closely enough together that this effect, usually captured by the coefficient of variation, regresses heavily from 1965 to the present. The exceptions, though muted, would be the early 1980s and the more recent Dead Puck years, so the standard deviation fairly accurately represents our variance above. CoV data:
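The coefficient of variation is just the standard deviation divided by the mean, which controls for the fact that a 1% spread means more in a low-scoring league. A minimal sketch in Python, with invented team shooting percentages for a higher-scoring and a lower-scoring era:

```python
# Sketch of the coefficient of variation (CoV): standard deviation of
# team shooting percentages divided by the league-average shooting
# percentage. All values are hypothetical, for illustration only.
from statistics import mean, pstdev

def coefficient_of_variation(team_sh_pcts):
    """CoV = population SD / mean; unitless, so eras with different
    scoring levels can be compared on the same footing."""
    return pstdev(team_sh_pcts) / mean(team_sh_pcts)

# Hypothetical leagues: a 1980s-style high-scoring spread vs. a
# Dead-Puck-style compressed one.
eighties  = [12.0, 13.5, 11.0, 14.0, 12.5]
dead_puck = [8.8, 9.4, 8.6, 9.6, 9.1]

print(f"1980s-style CoV:     {coefficient_of_variation(eighties):.3f}")
print(f"Dead-Puck-style CoV: {coefficient_of_variation(dead_puck):.3f}")
```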
Embedding interactive graphs into blog posts, especially blogs with a narrow runner like ours, is frequently an awkward process. Just about the time things look good, you tinker with it and it looks bad. Nevertheless, I had a bunch of old data I put together, once upon a time, and I wanted to get it out there in a form that you could tinker with. Basically, in the past I have used the percentage of team shots in the games a player participated in (%TSh; explanation here) as a way to capture a player’s contribution to the shot load; I also think it strongly implies a player’s involvement in and contribution to team offense overall.
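The %TSh calculation itself is straightforward. A minimal sketch in Python, with an invented season line rather than any real player’s totals:

```python
# Sketch of %TSh: a player's share of their team's shots across the
# games the player appeared in. Numbers are hypothetical.

def pct_team_shots(player_shots, team_shots_in_those_games):
    """%TSh = player's shots / team's shots (in the player's games),
    expressed as a percentage."""
    return 100.0 * player_shots / team_shots_in_those_games

# Hypothetical: 210 player shots against 2,100 team shots on the season.
print(f"%TSh: {pct_team_shots(210, 2100):.1f}%")
```

Note the denominator only counts team shots from games the player actually played, which keeps injuries and call-ups from distorting the share.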
In the case of today’s graph, I took %TSh and looked at aging curves with a multitude of players from 1967-68 through 2012-13 (like I said, the data is a little old). I prepared this with a selected group of players available for the filter, the majority of whom are stronger, more familiar players of the years covered. I also included some players that struggled by the metric, for the sake of comparison. To filter, click on the “Name” bar, click on “Filter,” and let your imaginations run wild. Feel free to download if you wish.
Note: I believe I set the cut-off at 20 GP before I would record the point of data. It’s old. I’m old. We’re all getting older.
Building on my post from last week on overall skater height going back to 1917-18, I wanted to dig a little further into the complexity of the data to see if there were any interesting takeaways. This included breaking the data out by forwards and defensemen, to see if there was ever any substantial increase in defenseman size, or any other hints of an attitude change in size trends and preferences. While there are some slight differences, what was most interesting to me was that, for all the changes the NHL has undergone, there seems to be a uniform attitude about size when looking at forwards and defensemen.
As any person interested in hockey stats should do, I’ve been gradually building my own personal database of player information that I can use when Y3K robs my future post-human self of cloud data for 3 seconds. To that end, player size wasn’t a huge priority but I knew eventually I’d want to have it, if only to think about how normal-sized I’d be in the 1920s NHL. In the process of bringing in all that data, I decided to do a little demographic work on player height and weight. We all know the players are bigger now than they were before, but by how much? And is there greater variance in size now or in the past?
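The two questions at the end of that paragraph reduce to a mean and a spread per season. A minimal sketch in Python; the heights (in inches) are invented stand-ins, not the actual database:

```python
# Sketch of the demographic question above: average player height and
# its spread, season by season. Heights are hypothetical illustrations.
from statistics import mean, pstdev

seasons = {
    "1924-25": [68, 69, 70, 70, 71, 72],  # hypothetical 1920s skaters
    "2014-15": [71, 72, 73, 73, 74, 76],  # hypothetical modern skaters
}

for season, heights in seasons.items():
    print(f"{season}: mean {mean(heights):.1f} in, SD {pstdev(heights):.2f} in")
```

Comparing the mean answers “by how much bigger,” while comparing the standard deviations answers whether the variance in size is greater now or in the past.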