Water Into Wine

Last week ESPN released the final group of results from their #NBArank project. This is just one product of the ESPN Forecast initiative, which converts the expertise of more than 200 different basketball minds into incredibly accurate collective predictions and projections.

#NBArank is a little different from most other ESPN Forecast projects because the results are standardized into a format that is disconnected from most other basketball statistics and measurements. The task here is not to project team win totals or the likely length and outcome of a playoff series. Instead, panelists assign each player a rating between 0 and 10. That simple scale is incredibly accessible to basketball fans of all sorts, but it makes connecting the group assessment to actual results on the floor, and measuring its accuracy in any way, much more complicated than with, say, projected points per game or PER.

However, one simple way to begin drawing connections between the player ratings from #NBArank and various statistics is by running correlations. One of the statistics that correlates most strongly with the #NBArank player ratings is Basketball-Reference's all-encompassing metric, Win Shares. If we use this year's ratings and last year's Win Shares (eliminating a few players, like Danny Granger, Derrick Rose and Kevin Love, who missed significant time with injury), we find a correlation of 0.860.
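A correlation like that 0.860 can be computed in a few lines. Here is a minimal sketch with NumPy; the ratings and Win Shares values below are invented purely for illustration (the real inputs would come from ESPN's ratings and Basketball-Reference's totals):

```python
import numpy as np

# Hypothetical data: #NBArank ratings (0-10) and prior-season Win Shares
# for a handful of players. These numbers are made up for illustration.
ratings = np.array([9.8, 9.1, 8.4, 7.2, 6.5, 5.9, 4.3, 3.1])
win_shares = np.array([19.2, 15.8, 13.0, 9.5, 7.1, 6.2, 3.4, 1.8])

# Pearson correlation coefficient between the two series
r = np.corrcoef(ratings, win_shares)[0, 1]
print(round(r, 3))
```

A value near 1 indicates that higher-rated players tended to post more Win Shares, which is exactly the relationship described above.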

What this shows is that the current ratings have a very strong relationship with last year’s production. It also raises the interesting potential of using the #NBArank player ratings to make team projections for the upcoming season, converting projected Win Shares into actual wins.

To do that we need to examine the relationship in the other direction: instead of looking at how this year's ratings reflect last year's results, we look at how last year's ratings reflected the ensuing results from last season. Using the same group of players (eliminating that handful of stars who missed the bulk of the season with injury) and a polynomial curve, which fit the data better than a straight line, we get an r^2 of 0.638. This means that about 63.8% of the variation in a player's total Win Shares could be mathematically explained by the variation in his #NBArank player rating.
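The fit-and-r^2 step can be sketched like this, again with invented data standing in for the real player pool. The degree of the polynomial here is an assumption; the article only says a curve fit better than a line:

```python
import numpy as np

# Hypothetical prior-season ratings and the Win Shares that followed
# (invented data for illustration only)
ratings = np.array([9.8, 9.1, 8.4, 7.2, 6.5, 5.9, 4.3, 3.1, 2.0, 1.2])
win_shares = np.array([17.5, 16.0, 11.2, 10.1, 6.0, 7.3, 2.9, 2.2, 0.8, 0.3])

# Fit a second-degree polynomial (assumed degree; a curve fit the
# author's data better than a straight line)
coeffs = np.polyfit(ratings, win_shares, deg=2)
predicted = np.polyval(coeffs, ratings)

# r^2: the share of variance in Win Shares explained by the fitted curve
ss_res = np.sum((win_shares - predicted) ** 2)
ss_tot = np.sum((win_shares - win_shares.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

The resulting curve is what lets us turn any player's 0-10 rating into a projected Win Shares total.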

The total Win Shares earned by the players on a team has, historically, had a very strong relationship with the actual number of games that team wins. Going back to the 1962-63 season, the average absolute difference between team wins and total Win Shares is 2.74. So while they won't be perfect, we can create reasonable team win projections by converting the #NBArank player ratings into Win Shares and then summing them for each team.
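Putting the pieces together, the projection step amounts to running each player's rating through the fitted curve and summing by team. A minimal sketch, with hypothetical polynomial coefficients and two made-up rosters (the real version would use the author's fitted coefficients and full NBA rosters):

```python
import numpy as np

# Hypothetical coefficients for the rating -> Win Shares curve
# (quadratic, linear, constant terms; invented for illustration)
coeffs = [0.15, 0.4, 0.2]

# Hypothetical rosters: team name -> list of #NBArank player ratings
rosters = {
    "Team A": [9.5, 8.0, 7.1, 5.5, 4.0, 3.2, 2.5],
    "Team B": [7.8, 7.0, 6.4, 6.0, 5.1, 4.4, 3.0],
}

# Projected wins = sum of each player's projected Win Shares
projections = {
    team: sum(np.polyval(coeffs, r) for r in player_ratings)
    for team, player_ratings in rosters.items()
}
for team, wins in sorted(projections.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {wins:.1f} projected wins")
```

Because Win Shares track team wins so closely (that 2.74 average error), the summed totals can be read directly as win projections.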

Just to make sure that this process worked and created results that were in the ballpark, I used it to project win totals for each team for last season and then compared them to each team’s actual win total.
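The sanity check described above is just a comparison of projected and actual win totals. A sketch with invented numbers, computing the mean absolute error of the backtest:

```python
import numpy as np

# Hypothetical projected vs. actual win totals for a few teams
# (invented values; the real check used all 30 teams from last season)
projected = np.array([63, 55, 48, 41, 33, 25])
actual = np.array([45, 56, 50, 44, 30, 27])

# Mean absolute error of the projections against reality
mae = np.mean(np.abs(projected - actual))
print(round(mae, 2))
```

A single large miss, like the 63-versus-45 pair above, can dominate an otherwise tight set of projections, which is exactly the Lakers situation described below.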


You can see the relationship between the projected and actual totals is reasonably strong, and fairly consistent with what we see in the individual player projections. The biggest outlier on the graph above is the Los Angeles Lakers, who would have been projected to win 63 games with this method and last year's #NBArank player ratings but managed to win only 45.

Taking that method and applying it to this season’s #NBArank player ratings, these are the projections for team win totals that we end up with:

[Table: projected win totals for each team, based on this season's #NBArank player ratings]

Keep in mind that the average error in converting Win Shares to actual wins was 2.74, so each of those totals could move up or down by two or three wins and still fall within the range of normal error.

One of the things that jumps out is how much this process bunches win projections around the middle. Only three teams are projected to win fewer than 30 games (there were eight such teams last season) and only five teams are projected to win more than 50 (there were seven last season). But this bunching might be caused by a similar effect in #NBArank itself. Last season I did some analysis of the top 30 players in #NBArank, comparing the progression in their ratings to their performance in metrics like Win Shares, Wins Produced and VORP. As you can see in the graph below, the 0-10 #NBArank scale created much less separation among the players than we see in those comprehensive player statistics.


Less separation among the players would lead to less separation among the teams, hence the bunching effect.

While this method of projecting team wins sees a lot more parity than we typically find by the end of the season, nothing here looks wildly different from what most statistical systems and pundits, including ESPN Forecast, have projected. Contrary to what you may have heard on the internet, reasonableness reigns in #NBArank!

Ian Levy

Ian Levy (@HickoryHigh) is a Senior NBA Editor for FanSided and the Editor-in-Chief of the Hardwood Paroxysm Basketball Network.