When going through the final rankings there were several interesting things that only show up when the data is viewed holistically. Here are some of our big findings that didn’t make it into the rankings piece.
1. GMs Aren’t As Bad Or Good As They Seem (Usually)
The standard deviation of the overall rankings is 0.48, with a mean of 3.0 (which follows, since this was a 1-5 scale). Using one standard deviation from the mean as the cutoff for "extreme," only four GMs made it into the "extremely good" section of the rankings, and just seven into the "extremely bad." And two of those seven have already been relieved of their duties.
This means two-thirds of the GMs ranked fall within the "normal" parameters, and eighteen are above our "average" rating. Not to mention, there is some extremely tight clustering among several GMs.
Nos. 9 (Bowman) through 14 (McPhee) all fall within 0.1 points of each other; Nos. 15 (Hextall) through 17 (Wilson) are within 0.015 points; and Gorton (No. 19) missed the 3.0 mark by just 0.06 points.
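For readers who want to follow the math, here is a minimal sketch of that one-standard-deviation cutoff logic in Python. The ratings below are invented for illustration only; the real numbers come from our ranking spreadsheet.

```python
from statistics import mean, pstdev

# Hypothetical overall ratings on the 1-5 scale, for illustration only;
# the real values come from our ranking spreadsheet.
ratings = [4.0, 3.8, 3.2, 3.1, 3.0, 3.0, 2.9, 2.8, 2.2, 2.0]

mu = mean(ratings)
sigma = pstdev(ratings)  # population standard deviation

# "Extreme" here means more than one standard deviation from the mean.
extremely_good = [r for r in ratings if r > mu + sigma]
extremely_bad = [r for r in ratings if r < mu - sigma]

print(f"mean={mu:.2f}, sd={sigma:.2f}")
print(f"{len(extremely_good)} extremely good, {len(extremely_bad)} extremely bad")
```

With this toy data, only the two highest and two lowest ratings clear the cutoffs — everyone else sits in the "normal" middle, which is the pattern the real rankings showed.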
As we mentioned in Part 1, we hope the big takeaway here is that the spread between GMs is very small. They each have areas of strength, and areas of weakness.
Much like player skill sets, it might be worthwhile for owners to pin down what they think they need the most in a GM when hiring, then ensure that GM builds a staff that can help compensate for their weaknesses.
Just like winning on the ice, managing a successful front office is a team effort.
2. Yes, We Disagreed (But Not Much)
On the chart above, the "Delta" column shows where our ratings disagreed. The darker blue a number, the more Carolyn favored that GM; the darker orange, the more Chris did. The standard deviation of our deltas, though, was 0.23, meaning we were usually within a quarter of a point of each other.
This is good, as it shows that we likely used similar, logic-based approaches to our ratings.
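To make the "Delta" idea concrete, here is a quick Python sketch. The ratings below are made up for illustration; the real values live in the chart above.

```python
from statistics import pstdev

# Hypothetical ratings from each of us on the 1-5 scale.
# These numbers are invented for illustration; the real
# values are in the chart above.
carolyn = {"Poile": 4.1, "Lombardi": 2.3, "Chayka": 3.4, "Shero": 3.5}
chris = {"Poile": 4.0, "Lombardi": 2.9, "Chayka": 2.9, "Shero": 3.1}

# Positive delta -> Carolyn rated the GM higher (blue);
# negative -> Chris did (orange).
deltas = {gm: carolyn[gm] - chris[gm] for gm in carolyn}
spread = pstdev(deltas.values())

for gm, d in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{gm:10s} {d:+.2f}")
print(f"delta sd: {spread:.2f}")
```

A small spread means the two of us mostly agreed; the individual deltas flag the handful of GMs where we did not.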
Carolyn’s big deltas were with John Chayka and Ray Shero, showing she likely gives a lot of the benefit of the doubt to rebuilding clubs. Of course, she’s known to be hard on Dean Lombardi, which was Chris’ biggest delta.
Chris was also much higher on Doug Armstrong, which was a bit surprising for a confessed Blackhawks fan.
Most interestingly, we had perfect, or nearly perfect, agreement on Bob Murray, Stan Bowman, David Poile, Brian MacLellan, Marc Bergevin, George McPhee, Brad Treliving, Tim Murray, and Jim Benning. These GMs span the spectrum from very good to very bad, so it doesn't appear to be any easier to identify one than the other.
3. Hockey Graphs Survey Results
Yzerman, Poile, Cheveldayoff
Benning, Sweeney, Lombardi
Best Player Development: Yzerman, Poile, Bob Murray
Worst Player Development: Tim Murray, Sakic, Bergevin, Benning
Best with Extensions:
Worst with Extensions: Benning, Bowman, Lombardi
Best with NCAA and College FAs:
Best with UFAs:
Worst with UFAs: Sweeney, Sakic, Chiarelli, Benning
“This guy is a genius but also an idiot” Votes
Stan Bowman – Managing the Cap (19% best/38% worst)
4. Surprise, Surprise
One of the biggest surprises was just how hard this exercise was. It's easy to do a quick power ranking, even a statistical one, because all the information is generally in one place. But reviewing four years of deals and drafts across multiple sites, and then having to weigh one trade or extension against another to come up with a single number rating?
It was tougher than expected, and it forced us to confront some of our own biases. But it was also fun, because we learned a bunch in the process.
As far as outcomes are concerned, Poile was a surprising No. 1. Smart money would have been on Rutherford (the most recent Cup champion) or perhaps Yzerman (king of extensions), and while both did score well, Poile was still head and shoulders above them.
Joe Sakic was another surprise, coming in above the “extremely bad” line, even after heading up a historically bad year. While the weight of his GM history is, well, not good, his recent moves likely pushed him up out of the danger zone.
However, the most contentious ranking (according to our Twitter feeds) was that of Kevin Cheveldayoff, GM of the Winnipeg Jets, which we saw coming from a mile away. The Jets have broken the hearts of their fans and prognosticators who don't understand why their on-ice talent doesn't translate to wins and playoff appearances. (*Cough*, *Ondrej Pavelec*, *Cough*.)
Nevertheless, even before picking Calder finalist Patrik Laine with the No. 2 pick last year, Chevy hit on all five of his previous first-round picks without ever picking in the top five (Scheifele, Trouba, Morrissey, Ehlers, Connor), and has built a potential powerhouse in one of the league's smallest markets. Whether he deserves to see it through is another question altogether.
5. Interesting Miscellany
Of all the categories, Drafting had the highest average rating by far, coming in at 3.32. This is good, as it was also one of our most important categories.
Unsurprisingly, UFA Signings had the lowest average rating, at 2.63. This means that your average GM does a poor job getting value in the free agent market. That underscores just how important Drafting and Development are to building a good team.
One concerning average is that of Extensions, which came in at 2.85. While close to average, it sits under that 3.0 mark, suggesting a league-wide weakness in identifying the proper value of players already on the roster.
One other important thing to note: the timeframe we chose (post-2013 CBA) did dramatically alter some GMs' rankings. Some of Chuck Fletcher's best drafts came from 2010-2012, when he picked up Mikael Granlund, Jason Zucker, Jonas Brodin, and Matt Dumba. Even Tyler Graovac, while not an impact player, was taken in the 7th round of 2011 and has racked up 57 games. Few 7th-round picks can claim even one.
And of course, the timeframe means that Lombardi’s most famous trade, you know the one we’re talking about, wasn’t included. Even Carolyn would have to (grudgingly) admit that worked out in his favor.
As such, it became clear that public opinions of GMs can be extremely fluid, as few people would have ever considered Ken Holland a bottom five GM in 2008. You either retire a hero, or sign enough bad extensions to see yourself become a villain.
All in all, this was a beast of a project. Hopefully, our methodology made sense, and our evaluations didn’t feel ridiculous. While it remains impossible to remove subjectivity from a project like this, it’s still a worthwhile endeavor.
And who knows… in 2018, this list could look completely different.