Box Score Analysis: Wins by Differential, Part 2

First order of business: congratulations, Boston Celtics, on your 17th title. Consolations to the Lakers as well. And a preliminary prediction of a Spurs championship next year if Manu stays healthy – it’s an odd year, after all.

In case you can’t tell, I’m a huge fan of differentials. It’s fairly well-documented that the best predictor of playoff success over the past several years hasn’t been regular season record, but regular season differential.

Want proof? Just look at this year. In the 2008 playoffs, almost every team was eliminated by a team with a higher regular season differential. Celtics over Pistons (#1 and #2), Lakers over Jazz (#3 and #4), Pistons over Magic (#2 and #5), Jazz over Rockets (#4 and #9), Hornets over Mavs (#6 and #10), and the list goes on. The only exceptions were the Spurs over the Suns and Hornets (and as we all know, the Spurs don’t really care about the regular season) and the Cavs over the Wizards. Differential matters.

Want more proof? The highest differential in both 2005 and 2007 belonged to the Spurs, despite their not owning the NBA’s best record in either season. In fact, half of the last 10 NBA champions have won the point differential crown, with only the 2002 and 2004 Lakers winning the title without ranking at least top-5.

So, all this considered, it’s a pretty powerful statistic. It correlates ridiculously well with total regular season wins, but it’s an even better predictor of playoff success, considering how rarely (via Ball Don’t Lie) regular season victories predict championships.

It’s also an extremely versatile statistic. You don’t have to consider things like the pace of the game, the team’s focus (offensive or defensive), efficiency ratings or anything else. It’s very straightforward, and everything else that’s significant in the game comes through in the differential.

That’s why I’m utilizing (ok, abusing) it so much in this ongoing analysis; it’s one of the more powerful statistics, and we’re reaffirming that here. We’ve already made a few quite notable discoveries, but the ones in this entry are better than all the rest. I think in this entry, we’re actually going to bridge the gap between ‘interesting to statistics nerds’ (like myself) and ‘interesting to the casual NBA fan’.

Today we’ll be answering two questions: is it more important for a team to play well in the first half or the second, and is there a particular quarter in which it’s more important for a team to perform well?

Before we continue, let me quickly define what I mean when I say one differential was ‘more beneficial’ than another. Essentially, if a positive differential (for the home team, meaning they outscored the away team) during quarter (or half) A led to more wins than the same differential during quarter B, then quarter A is defined as ‘more beneficial’ (performing better in that quarter was more closely tied to winning). If, on the other hand, a negative differential during quarter A led to more wins than the same differential during quarter B, quarter A is considered less beneficial, because being outscored during that portion of the game was less likely to result in a loss.
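
(For the code-inclined, the tabulation behind this definition boils down to something like the Python sketch below – an illustrative re-creation rather than my actual worksheet. The column names, ‘home_q1’, ‘away_q1’, ‘home_final’ and so on, are hypothetical stand-ins for however your box scores happen to be labeled.)

    from collections import defaultdict

    def period_differential(g, period):
        """Home-minus-away scoring margin for one period of one game.

        `g` is a dict with hypothetical keys 'home_q1'..'home_q4',
        'away_q1'..'away_q4', 'home_final' and 'away_final';
        `period` is 'q1'..'q4', 'h1' (first half) or 'h2' (second half).
        """
        if period == 'h1':
            quarters = ['q1', 'q2']
        elif period == 'h2':
            quarters = ['q3', 'q4']
        else:
            quarters = [period]
        return sum(g['home_' + q] - g['away_' + q] for q in quarters)

    def win_pct_by_differential(games, period):
        """For every differential observed in `period`, tally the home team's
        wins, losses and winning percentage."""
        record = defaultdict(lambda: [0, 0])  # differential -> [wins, losses]
        for g in games:
            d = period_differential(g, period)
            home_won = g['home_final'] > g['away_final']
            record[d][0 if home_won else 1] += 1
        return {d: (w, l, w / (w + l)) for d, (w, l) in record.items()}

Comparing, say, the entry for +8 in win_pct_by_differential(games, ‘h1’) against the entry for +8 in win_pct_by_differential(games, ‘h2’) is exactly the ‘more beneficial’ comparison defined above.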

This approach isn’t without its faults, though, and I’d be a propagandist (woah, that’s actually a word?) instead of a statistician if I didn’t mention them. This study assumes that winning a quarter is good and losing a quarter is bad. The last portion of the study didn’t entirely support this, showing that losing a quarter by two points or fewer might still be good – but this would only confound a tiny percentage of our data. That’s the only obvious confound I can identify; if anyone reading notices something I’ve overlooked, e-mail me – I’m not defending this idea simply because I believe it, but because the statistics appear to back it up.

Fortunately, most other typical confounds don’t really apply here: usually a study like this would need to ensure that the four quarters were played out under equal conditions, but this study does not seek to attribute causation. We’re not trying to find out why a particular quarter is more important, only whether a particular quarter is more important. So with that, on to the results:

Difference Between Halves

Is it more important to perform well in the first half, or the second half? There are several ways to approach this, but one of the best would be to look at identical performances in the first and second half (here, identical differentials) and see whether the same differential leads to more wins in one half than in the other. And, in this case, it does – well, sort of.

For this portion of the study, I calculated the statistical significance of the difference in winning percentage between the first and second halves for every differential. OpenOffice.org Calc is my friend.
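
(If you’d rather skip the spreadsheet, a plain two-proportion z-test gives you the same kind of comparison – here’s a minimal Python sketch, with 1.645 as the cutoff for a two-sided test at the 90% confidence level. Treat it as the textbook version of the idea rather than a cell-for-cell transcript of my worksheet.)

    import math

    def two_proportion_z(w1, n1, w2, n2):
        """z-statistic for the gap between two winning percentages:
        w1 wins in n1 games versus w2 wins in n2 games (for example, the same
        differential occurring in the first half versus the second half)."""
        p1, p2 = w1 / n1, w2 / n2
        pooled = (w1 + w2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # abs(z) > 1.645 corresponds to significance at the two-sided 90% confidence level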

What we find is that there are 39 differentials which occurred at some point in the season in each half. For 21 of these differentials, the home team fared better when that differential came in the second half (for example, winning the second half by 8 led to an 86% winning percentage, whereas winning the first half by 8 led to only a 64% winning percentage). For the other 18, the first half was better; so right here we see no strong evidence that the second half is more important.

But, this goes to another level when taking statistical significance into account. Of those 39 differentials, the difference in winning percentage between the first and second halves is only statistically significant (at the 90% confidence level) for 14 of them – meaning that only 14 of the differences are actually indicative of a relevant discrepancy. And of those 14, 11 come when the second half yields more wins.

This is about the strongest statistical evidence I’ve yet come across that strong second half performance results in more wins than identical first half performance. Whenever the statistics show that a certain differential is significantly more beneficial in one half than the other, that half is the second half 11 out of 14 times. I can’t really re-state that in any other way; the stats strongly suggest that the second half is more important. I’m sure the Lakers will agree, having seen both sides of just how much the second half matters (Game 1 vs. the Spurs, Game 4 vs. the Celtics).

The study does raise several questions, though. One oddity is that the 14 statistically significant differences don’t all favor the same half. Why do three appear to benefit the first half, but the other eleven appear to benefit the second half? Is there a pattern to which differentials lie on which side?

Let’s find out: the differentials that are more beneficial in the second half are -20, -17, -16, -14, -9, -3, 1, 7, 8, 9 and 18. The differentials that are more beneficial in the first half are -13, 10 and 19. Now, those results are rather unusual – each of the beneficial first-half differentials lies right next to a differential that’s more beneficial when encountered in the second half. For example, every home team (all 20) carrying a 19-point advantage into halftime won, whereas only eight of the eleven home teams pulling in a 19-point advantage in the second half won. On the flip side, every home team that achieved an 18-point advantage in the second half won, whereas only eleven of fourteen won after doing the same in the first half. The same sorts of trends (though not as drastic) can be observed for the 9 vs. 10-point differentials and the -13 vs. -14-point differentials.

That’s a very strange observation indeed, and requires some explanation. There are two high-level possibilities:

  • There are certain differentials that are more beneficial when encountered in the first half, and certain ones that are more beneficial in the second half.
  • There is an overall trend as to which half it is better to encounter any given differential in, and the differentials pointing towards the first half being more beneficial are the result of sampling error.

The first of those options would be more plausible if there were a pattern to which differentials were more beneficial when – for example, if large differentials were better when encountered in the second half and narrow ones better in the first half. But the results don’t suggest that – the beneficial first-half differentials sit right next to the beneficial second-half differentials.

So, that leaves us with the second option. The second option seems a bit like a cop-out – just throw out the results that we don’t like? And I admit, it is – statistically, there’s a pretty slim chance that three of the fourteen would come up the way they did just by chance (somewhere around a 2% chance). But there’s an even slimmer chance that the split would be as lopsided as it is (11 vs. 3), and there remains no logical explanation for why such close differentials would yield completely opposite results.
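
(That ‘around 2%’ figure is easy to sanity-check: if each of the 14 significant differentials were equally likely to favor either half, the number favoring the first half would follow a fair-coin binomial distribution. Under that simple assumption:)

    from math import comb

    n = 14                                     # statistically significant differentials
    p_exactly_3 = comb(n, 3) / 2 ** n          # exactly 3 of them favor the first half
    p_3_or_fewer = sum(comb(n, k) for k in range(4)) / 2 ** n   # 3 or fewer favor the first half

    print(round(p_exactly_3, 3), round(p_3_or_fewer, 3))        # roughly 0.022 and 0.029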

Fortunately, we can drop this here – why? Because like I mentioned in the last post, we can re-hash this analysis when we multiply our sample size. With a sample size of 12,300 games, nearly every difference will be statistically significant – so if these results persist into that study, we can conclude that somehow, for some inexplicable reason, an 18-point differential is better in the second half, but a 19-point one is better in the first.

Difference between Quarters

Like the halves study, this was done by calculating the statistical significance of the difference in winning percentage for every pair of quarters – all six pairs. And since we already set up the framework for how we’re doing these analyses, let’s jump right into the results – all ‘statistically significant’ stats are at least at the 90% confidence level.

Rather than get unnecessarily wordy, here’s the format for the stats: (Quarter): (number of differentials with the advantage); (Quarter): (number of differentials with the advantage); (Quarter) SS: (number of statistically significant advantages); (Quarter) SS: (number of statistically significant advantages) – followed by a brief summary of what I’m taking away from it. Like above, we’re assuming below that one of the quarters is absolutely better (over all differentials) than the other – this is not a conclusively proven assumption, nor can it be at this stage, but there’s no evidence (logical or statistical) to the contrary.
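
(If you want to reproduce these tallies, the bookkeeping for each pairing looks roughly like the sketch below. It leans on the hypothetical win_pct_by_differential and two_proportion_z helpers sketched earlier and the same hypothetical games list, so treat it as an outline of the procedure, not the procedure itself.)

    def compare_periods(games, period_a, period_b, z_cutoff=1.645):
        """Over the differentials that occurred in both periods, count which period
        produced the higher winning percentage, and how many of those gaps clear
        the significance cutoff."""
        a = win_pct_by_differential(games, period_a)
        b = win_pct_by_differential(games, period_b)
        advantage = {period_a: 0, period_b: 0}
        significant = {period_a: 0, period_b: 0}
        for d in set(a) & set(b):
            (wa, la, pa), (wb, lb, pb) = a[d], b[d]
            if pa == pb:
                continue                        # no advantage either way
            better = period_a if pa > pb else period_b
            advantage[better] += 1
            if abs(two_proportion_z(wa, wa + la, wb, wb + lb)) > z_cutoff:
                significant[better] += 1
        return advantage, significant

    # e.g. all six quarter pairings:
    # for qa, qb in [('q1', 'q2'), ('q1', 'q3'), ('q1', 'q4'),
    #                ('q2', 'q3'), ('q2', 'q4'), ('q3', 'q4')]:
    #     print(qa, qb, compare_periods(games, qa, qb))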

1st Quarter vs. 2nd Quarter: 1st: 19; 2nd: 15; 1st SS: 3; 2nd SS: 2
No real conclusive evidence that the first quarter is more important than the second, though this is fairly conclusive that the second quarter is not more important than the first.

1st Quarter vs. 3rd Quarter: 1st: 19; 3rd: 17; 1st SS: 2; 3rd SS: 8
Now, this is interesting. The 1st quarter holds a slight advantage over the third in terms of the winning percentages resulting from different differentials; but only 2 of the 19 first-quarter-favoring differentials are statistically significant. On the other hand, almost half of the third-quarter-favoring differentials are statistically significant. This suggests a notable lean towards the third quarter, but not a completely conclusive one.

1st Quarter vs. 4th Quarter: 1st: 16; 4th: 19; 1st SS: 2; 4th SS: 4
Like the 1st vs. 2nd data, this data is too close to suggest that the fourth quarter is conclusively more important than the first; however, as with the second, it is evidence that the first quarter almost certainly is not more important than the fourth.

2nd Quarter vs. 3rd Quarter: 2nd: 13; 3rd: 21; 2nd SS: 3; 3rd SS: 4
The wide gap in the overall advantage counts (21 to 13) is not as notable as the closeness of the statistically significant counts (4 to 3), but it is still statistically significant that the overall counts are so far apart (once again, statistics analyzing statistics – and don’t you dare try to read that sentence 3 times fast). So what does that mean in non-statistics-ese? Basically, there’s evidence that the third might be a bit more critical than the second, but more notably, the second certainly isn’t more critical than the third.

2nd Quarter vs. 4th Quarter: 2nd: 16; 4th: 18; 2nd SS: 2; 4th SS: 7
Fourth is more important than second, basically – the large discrepancy in statistically-significant differentials shows that.

3rd Quarter vs. 4th Quarter: 3rd: 18; 4th: 19; 3rd SS: 8; 4th SS: 9
And the third and fourth are about as equal as they can be from the data available – either could actually be better than the other, or they could be functionally the same.

Now, pardon me, but I’m going to be a nerd for a moment and break these into nonsensical-looking equations to try to come up with a Unified Theory of Quarter Differentials.

So, we have 1st >= 2nd, 3rd > 1st, 4th >= 1st, 3rd > 2nd, 4th > 2nd, 3rd = 4th.

Well, right away that’s good news (well, ‘good’ if you want this study to have a conclusion): there are no blatant contradictions there. That’s actually better news than it may appear – if there truly was no pattern to which quarter was better than which (and all the observed results were simply random), there would almost certainly be a contradiction.
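
(Checking a set of pairwise verdicts for that kind of blatant contradiction is easy enough to automate, if you’re so inclined: treat each strict ‘>’ verdict as a directed edge and look for a cycle. A toy sketch:)

    from collections import defaultdict

    def has_blatant_contradiction(strict_edges):
        """True if the strict ('>') verdicts alone form a cycle, e.g. A > B and B > A.
        (Contradictions mixing in '>=' or '=' verdicts would need a fancier check.)"""
        beats = defaultdict(set)
        for a, b in strict_edges:               # a > b
            beats[a].add(b)

        def reaches(start, target, seen):
            for nxt in beats[start]:
                if nxt == target or (nxt not in seen and reaches(nxt, target, seen | {nxt})):
                    return True
            return False

        return any(reaches(b, a, {b}) for a, b in strict_edges)

    # the strict verdicts from above: 3rd > 1st, 3rd > 2nd, 4th > 2nd
    print(has_blatant_contradiction([('3rd', '1st'), ('3rd', '2nd'), ('4th', '2nd')]))   # False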

The conclusion is not, however, what I had anticipated. Judging from earlier data, I was fairly sure that the third quarter would prove to be conclusively most important. However, according to this data, it isn’t: the unified formula suggested by the data is 4th = 3rd > 1st >= 2nd – that is, the third and fourth quarters are definitely more important than the first and second, but are themselves even, and the first and second may also be even.

Now, these statistics allow a certain degree of interpretation – for example, how do you resolve 4th >= 1st and 1st >= 2nd but 4th > 2nd? This sort of discrepancy – two comparisons suggesting a consistent order but not a consistent degree of certainty – is not an uncommon product of this type of analysis’s random error. There is likely a resolution (or, alternatively, there is no pattern and these results are, in fact, by chance), but unfortunately the only way to establish that resolution is to increase the sample size – that, again, is a task for later in the summer.

I could milk this some more, but let’s cut this portion off here – we have one more thing to touch on before we turn the page on this portion of the analysis.

Overall Differences

Before ending this portion of the analysis, let’s look at one last approach. This approach is not as specific or thorough as the ones above, which actually grants an advantage: while it isn’t able to pick up on subtle differences between quarters and halves, any notable difference it does reveal can be assumed to be indicative of a true difference.

This approach is simple: what was the overall winning percentage of teams that “won” each quarter and half? A simple compilation of the gargantuan dataset we already had revealed these results (note that the numbers do not add up to 1,230 because tied quarters and halves favor neither team; a quick sketch of the tally appears after the list):

  • 1st Quarter Winners: 756 wins, 405 losses, 65.1%
  • 2nd Quarter Winners: 745 wins, 429 losses, 63.4%
  • 3rd Quarter Winners: 786 wins, 387 losses, 67.0%
  • 4th Quarter Winners: 766 wins, 396 losses, 65.9%
  • 1st Half Winners: 855 wins, 323 losses, 72.5%
  • 2nd Half Winners: 889 wins, 304 losses, 74.5%
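
(Here’s the sketch promised above: a one-pass tally over the same hypothetical games list, reusing the period_differential helper from the earlier sketch.)

    def period_winner_record(games, period):
        """Win-loss record and winning percentage of the teams that outscored
        their opponent in the given period ('q1'..'q4', 'h1' or 'h2')."""
        wins = losses = 0
        for g in games:
            diff = period_differential(g, period)   # home minus away, as before
            if diff == 0:
                continue                            # tied period: no winner to credit
            home_won = g['home_final'] > g['away_final']
            if (diff > 0) == home_won:              # the period winner also won the game
                wins += 1
            else:
                losses += 1
        return wins, losses, wins / (wins + losses)

    # for label in ['q1', 'q2', 'q3', 'q4', 'h1', 'h2']:
    #     print(label, period_winner_record(games, label))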

And lastly comes that epic question of statistical significance. Which of these differences are statistically significant? I’ll spare the math and just jump to the conclusion; factoring everything in, these statistics show a statistically significant advantage in the comparisons 1st > 2nd, 3rd > 1st, 3rd > 2nd, and 4th > 2nd (it’s notable that while all four are significant at or above the 90% level, 3rd > 2nd and 4th > 2nd are the most significant). Additionally, the data suggests that 4th > 1st and 3rd > 4th, though at far-from-certain confidence levels (66% and 72%, respectively).
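
(As a back-of-the-envelope illustration of the math being spared: plugging the 2nd and 3rd quarter records above into the earlier two_proportion_z sketch, and treating the two winner pools as independent samples – a simplification, since they’re drawn from the same set of games, so it won’t reproduce the exact confidence levels quoted here – gives a z of roughly 1.8.)

    # 3rd quarter winners went 786-387; 2nd quarter winners went 745-429 (from the list above)
    z = two_proportion_z(786, 786 + 387, 745, 745 + 429)
    print(round(z, 2))   # comes out around 1.8, past the two-sided 90% cutoff of 1.645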

So, considering only these conclusions, we would come up with this demented equation: 3rd >= 4th > 1st > 2nd. Don’t get too excited by the absence of a contradiction with our earlier formula – while usually a conclusion is nearly-certainly proven when two separate approaches lead to the same result, these two approaches aren’t entirely different – they’re based on the same data, so it’s natural for them to be closely related.

What’s notable, though, is that there is a larger degree of certainty in the relationships when using this method. Whereas the earlier method failed to make a determination between the 3rd and 4th quarters, this one suggests a possible edge for the 3rd quarter (and certainly shows no edge for the 4th – equality is still possible, however). To a greater degree, this approach gives a more certain verdict on the comparison of the 1st and 2nd – namely, that the 1st is, indeed, more important.

So which do we accept? That’s a surprisingly subjective question for a statistical issue. Statistically, we could take either of the two studies, or a combination of the two – but they do show slightly different things (‘Little White Statistics’), so the question becomes where we attribute the random error. This effect can be minimized by expanding the study size, but in the meantime it’s a matter of judgment – I, personally, feel random error will affect the first method more due to its more segmented nature – the sample sizes for each individual comparison are smaller (giving random error a larger impact), and there are many, many more samples to be affected. Therefore, I believe the conclusion of this latter portion to be more accurate.

Statistically, the proper way to say this would be something along the lines of “we can be 90% confident of the latter conclusion (3rd >= 4th > 1st > 2nd) and 95% confident of the former conclusion (3rd = 4th > 1st >= 2nd)”. Looking carefully, we see that the latter conclusion is really a specific case of the former conclusion (noting that ’3rd = 4th’ means that we can’t make a judgment, not that we’re judging them to be equal) – or, in other words, both the former and latter conclusion can be correct, but the latter can’t be correct without the former. One of those ‘all squares are rectangles but not all rectangles are squares’ types of situations.

So, I’m accepting the latter for the time being, but we’ll definitely revisit this portion of the study when we analyze the past 10 years – at a sample size of 12,300, nearly everything is statistically significant, allowing us to more definitively define these trends.

But wait! What about a comparison of the halves? Comparing the halves allows us to say with 86% confidence that the second half is more important (under the definition we’ve repeated several times) – but it’s a commonly accepted notion that at least a 90% confidence level (in many places, a 95% confidence level) is required to make a conclusion, so alas, we have no conclusion. If the same ratio holds up when looking at the last 10 years, we’ll have an incredibly statistically significant conclusion, so we’ll see then. But verily this vichyssoise of verbiage veers most verbose, so let me move on to the Little White Takeaways.

LITTLE WHITE TAKEAWAYS


In this installment, we looked at whether or not certain quarters (or halves) were more beneficial to perform well in – or, in other words, which quarters correlated best with a team’s success – or, in even simpler terms, which quarters are most important. And in our analysis, we arrived at the following conclusions:

  • The 2nd half is likely to be more important than the 1st half, but the statistical evidence is not yet definitive (though it’s about as close as it can be without being considered definitive).
  • There is a “pecking order” of which quarter is most important to perform well in. Generally, this order is of the form 3rd = 4th > 1st >= 2nd (or, the 1st is better than or as good as the 2nd, and the 3rd and 4th are indistinguishable from each other but both better than the 1st and 2nd).
  • Specifically, that same order can be narrowed down a bit, to 3rd >= 4th > 1st > 2nd (or, the 1st is better than the 2nd, the 3rd and 4th are both better than the 1st, and the 3rd may be better than the 4th). This doesn’t contradict the above idea; it’s just a special case of it – we can be very confident that the above is true, and slightly less confident that this one is true.
  • All these conclusions will become much more set in stone when we perform this analysis on every box score over the past 10 years (rather than just this season). Some of you may wonder whether the rule changes over the past 10 years will affect the results – but the good news is that the differential statistic should not be affected with regard to wins. There’s no reason to believe that the rule changes made a certain quarter more important (although never fear, just in case we’ll run some basic tests to see if somehow they did).
  • So what’s next? Well, there are three items of business on tap for the near future. First, there are still many angles of this analysis to consider – most notably, do certain teams perform better in certain quarters, and do the elite teams perform better in a particular quarter? Second, like I said in the first post, we want to conduct that analysis on Kobe Bryant and test whether his team’s success really does go down as his shot volume increases, as well as whether that’s due to his choices or other causes (teammates’ off-nights forcing him to shoot more). And third, I’d like to run a few of the tests we’ve been doing the past few days on just NBA playoff or NBA Finals games, to see if the statistics change.

    So what comes first? Probably the first (on teams and differentials), but after that we’re likely headed for a break from this study for a few weeks so we can focus on Kobe Bryant. Yes, I know it would’ve been smarter to analyze Kobe during the Finals when all eyes were on him. I’m a blogger, not a businessman.

One Response to “Box Score Analysis: Wins by Differential, Part 2”

  1. Ap Says:

    Wow, as a person who has studied a bit of stats, this is really great stuff. I love what a little analysis can add to one’s understanding of the game.

    Keep up the great work!
