Time to switch focus to defense. This presented the tricky task of finding a catch-all defensive stat without major flaws.
Steals, Blocks, Defensive Rebounds, and their associated rates all prove too simplistic and skew towards various positions.
Defensive Rating over-credits team defense; i.e., the poorest defenders on elite defensive teams typically rate as superior to the best defenders on lesser teams. That ruled out the Defensive Win Shares built from it.
Opponent’s PER can provide cover to players whose teams hide them on defense; when Ben Gordon rated as an elite shooting guard, it did not seem worthwhile to proceed down that path.
On-Court / Off-Court numbers fluctuate based on a player’s back-up as well as the quality of teammates and opponents…they do build the foundation for where I headed though.
Adjusted plus-minus (APM) has intrigued me since I became aware of it; the number achieves a sensible complexity for evaluating players. Many things happen on a basketball court that a box score cannot capably measure; to ascertain who contributes most to victory, why not run a massive regression with every player as a variable and every line-up matchup as an equation? Unfortunately, quick online searches only turn up APM data from 2006 – 2007 to the present, and the numbers are not split into offense and defense. The numbers are also statistically noisy over sample sizes even as large as one season.
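The regression idea can be sketched in a few lines. Everything below (the players, stints, and margins) is made-up toy data for illustration, not actual NBA numbers:

```python
# A toy illustration of the APM idea, with made-up players and stint margins.
# Each stint (line-up matchup) is one equation; each player is one indicator
# variable: +1 while on the "home" side, -1 while on the "away" side, 0 if off.
import numpy as np

players = ["A", "B", "C", "D"]  # a hypothetical four-player league
X = np.array([
    [1.0,  1.0, -1.0, -1.0],   # stint 1: A,B vs C,D
    [1.0, -1.0,  1.0, -1.0],   # stint 2: A,C vs B,D
    [1.0, -1.0, -1.0,  1.0],   # stint 3: A,D vs B,C
])
y = np.array([6.0, 2.0, 5.0])  # point margin per 100 possessions in each stint

# Least squares over the stints. APM is only identified up to a constant,
# so lstsq's minimum-norm solution rates players relative to average (sum = 0).
ratings, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, r in zip(players, ratings):
    print(f"{name}: {r:+.2f}")
```

With roughly 450 players and thousands of stints, the same system becomes badly noisy and collinear, which is exactly the weakness noted above; RAPM tames it by adding a regularization penalty to this regression.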
Occasionally throughout the 2012 – 2013 Pro Basketball Prospectus, regularized adjusted plus minus (RAPM) gained mention, most notably as it related to leading the field in a group of other stats-based projection systems during the 2011 – 2012 season. Given my mild inclination towards APM, I was intrigued. Scrolling through the vast wall of numbers available here, the defensive data matched my understanding of individual performance better than anything else prior. I became a big fan, developed a number that met the needs of this project, and now we will wander into the murky world of plus-minus stats.
If you just want to know which players rated best at defense, skip the next few paragraphs. If you want to read about the arbitrary creation of a defensive metric…carry on. The number derived can loosely be defined as “points stopped compared to a 30-win player”. Thirty wins became my threshold, which equates to a 36.6% winning percentage. From there, with “A” equal to the NBA’s average offensive rating for a season, I solved the equation:
0.366 = R^14 / (A^14 + R^14), where “R” represents the 30-win player’s rating (this Pythagorean format is commonly used to calculate a team’s expected winning percentage from its offensive and defensive ratings).
The “R” always came out 4 to 4.2 points below “A”; halving that gap isolates the defensive value of the 30-win player. So, a 30-win player allowed two more points per 100 possessions than an average player. Plugging in 2 for that gap, the “points stopped compared to a 30-win player” calculates as:
Total Possessions / 200 * (Defensive RAPM per 100 Possessions + 2)
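As a sanity check, the derivation above can be run in a few lines of code; the league-average rating of 106 is a hypothetical stand-in, and `points_stopped` simply restates the formula:

```python
# Sanity-checking the arithmetic above. A = 106 is a hypothetical
# league-average offensive rating, not an actual season's figure.
A = 106.0

# Solve 0.366 = R**14 / (A**14 + R**14) for R:
#   R**14 / A**14 = 0.366 / (1 - 0.366)  =>  R = A * (0.366 / 0.634) ** (1 / 14)
R = A * (0.366 / (1 - 0.366)) ** (1 / 14)
gap = A - R          # comes out roughly 4.1 points for realistic values of A
half_gap = gap / 2   # the defensive share: ~2 points per 100 possessions

def points_stopped(total_possessions, d_rapm_per_100, baseline=2.0):
    """Points stopped compared to a 30-win player, per the formula above."""
    return total_possessions / 200 * (d_rapm_per_100 + baseline)

print(points_stopped(6000, 1.5))  # a +1.5 defender over 6000 possessions -> 105.0
```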
With all of that said, the point of this series is not to advocate for a defensive metric. Basically, this number gives players credit for the quality of their defense, but also for how much they played. This was important, as the Offensive Win Shares provided similar context, and in both cases I was trying to avoid overvaluing players that were great at one end but couldn’t get on the court because of their atrocities at the other. The number was also contrived so that the vast majority of players provided positive values while some small percentage offered negative results; this again mirrors OWS, with approximately 15% of players rating sub-zero. Finally, the rating does not compare players solely to their position, but instead to a composite 30-win player.
So, take that for what you will; the defensive parts of this series build upon it. The results generally matched the consensus view of who played strong or weak defense. When surprises arose, I reached out to my fellow bloggers for confirmation or refutation (special thanks to Hickory High, 3 Shades of Blue, The Two Man Game, Queen City Hoops, The Brooklyn Game, Daily Thunder, and Warriors World). The clear majority responded along the lines of “Sure, that’s a reasonable enough result.”
At the end of the analysis and correspondence, I decided the number performed satisfactorily enough to write about.
Today, I will not delve into the relevance of draft measurements as they relate to defense. Instead, this article ends with three “All Kevin’s Summer Project Defensive Teams” and a little discussion about one surprise. From that, your opinion of using RAPM may grow or fade. Remember, these only represent seasons meeting the following screens:
- Drafted from 2000 to 2010
- First four seasons post-draft
- Only the 2000 – 2001 through 2010 – 2011 seasons
- Player came from the NCAA or straight from high school (pre-2004)
- Player participated in pre-draft measurement combines
Per position, the most quantitatively productive defensive seasons revealed by this study were:

First Team:
PG – Chris Paul, 2008 – 2009, 194 points better than a 30-win player
SG – Dwyane Wade, 2005 – 2006, 158 points better
SF – Andre Iguodala, 2007 – 2008, 250 points better
PF – Josh Smith, 2007 – 2008, 393 points better
C – Dwight Howard, 2007 – 2008, 438 points better

Second Team:
PG – Mike Conley, 2010 – 2011, 127 points better than a 30-win player
SG – Ronnie Brewer, 2008 – 2009, 138 points better
SF – Kevin Durant, 2009 – 2010, 248 points better
PF – Josh Smith, 2006 – 2007, 262 points better
C – Dwight Howard, 2006 – 2007, 365 points better

Third Team:
PG – Kirk Hinrich, 2005 – 2006, 121 points better than a 30-win player
SG – Monta Ellis, 2008 – 2009, 121 points better
SF – Andre Iguodala, 2005 – 2006, 232 points better
PF – Chris Bosh, 2004 – 2005, 259 points better
C – Jason Collins, 2004 – 2005, 359 points better
Generally speaking, these results met expectations. Monta Ellis serves as the primary ‘red flag’, and his presence led to three conclusions. First, as expected, the guards offered the least impressive defensive seasons. Much of the group did not assert itself strongly on defense during the early years of their careers; the tenth-best season was only 80 points better than a “30-win player”, or less than one point per NBA game. Players qualitatively much better than Monta Ellis lacked in quantity: in one season, Tony Allen provided 94 points of benefit over 2,600 defensive possessions, compared to Ellis’s total over 6,400. Per possession, in his “best” defensive seasons, Ellis was near average; he benefits from playing a lot.
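That quantity-versus-quality point can be made concrete with the totals quoted above (recall that an average player sits about +2.0 per 100 possessions over the 30-win baseline):

```python
# The quantity-versus-quality comparison in per-100-possession terms,
# using the totals quoted above. An average player sits about +2.0 over
# the 30-win baseline (from the derivation earlier in the post).
def rate_per_100(points_above_30w, possessions):
    return points_above_30w / possessions * 100

allen = rate_per_100(94, 2600)    # Tony Allen: well above average per possession
ellis = rate_per_100(121, 6400)   # Monta Ellis: just under the +2.0 average line
print(round(allen, 2), round(ellis, 2))  # 3.62 1.89
```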
This leads to conclusion number two: I need to adjust for pace. Golden State easily played faster than every other team in 2006 – 2007. While my method avoids over-valuing players that rarely found the court over four seasons (but managed to produce a highly efficient 300-minute campaign), it allows guys whose teams played at blistering speed to benefit. Adjusting for pace, 2007 – 2008 Ronnie Brewer sneaks into the third shooting guard spot, supplanting Ellis (note to Hardwood Paroxysm: I have some re-work to do…could be three weeks again).
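One plausible way to implement that adjustment (my own sketch of the mechanics, not necessarily the method I’ll settle on, and with illustrative pace figures) is to rescale possessions to league-average pace before computing points stopped:

```python
# A hypothetical sketch of a pace adjustment: rescale a player's possessions
# to league-average pace before computing points stopped, so that fast-paced
# teams stop inflating raw totals. Pace values below are illustrative only.
def pace_adjusted_points_stopped(total_possessions, d_rapm_per_100,
                                 team_pace, league_pace, baseline=2.0):
    adj_possessions = total_possessions * (league_pace / team_pace)
    return adj_possessions / 200 * (d_rapm_per_100 + baseline)

# An average-rated defender on a team playing well above league pace
# sees his raw points-stopped total shrink accordingly.
print(round(pace_adjusted_points_stopped(6400, 0.0, team_pace=99.0, league_pace=91.0), 1))
```

When team pace equals league pace, this reduces exactly to the unadjusted formula, so average-speed teams are untouched.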
Finally, Monta’s good defensive seasons were the final two years of his rookie contract. RAPM considered him a “30-win defender” as a rookie. In his second and third campaigns, when he earned $660,000 and $770,000, his defense rated better, at nearly league average per possession. As soon as the ink dried on his six-year, $66 million extension, the credit received from RAPM dropped back towards “30-win defender” status; whether or not this exact scenario actually occurred, it is at least a narrative I can believe. Playing for a contract happens, right?
It’s time to wrap this up. To my knowledge, a perfect defensive catch-all does not exist; RAPM works best. With some minor modifications, I developed a number that meets this series’ needs. Come back next time for an assessment of which pre-draft measurements most impacted point guard defense.