Unearned runs, which score as a result of defensive errors, do not affect the pitcher's ERA. But what effects do unearned runs have, and is there anything significant about a pitcher who has a higher percentage of unearned runs?
The Effect of Unearned Runs on Pitchers
Saturday February 12th, 2011
For the average starting pitcher, about 8% of the runs allowed are unearned. The 25th percentile sits at 5%, the median at 7.5%, and the 75th percentile at 10.6%. The highest unearned-run percentage is 25.2%, while the lowest is 0% (six players allowed no unearned runs at all). With the median 0.5 points below the mean, there is a slight skew: most pitchers sit a bit below average, with a few high outliers pulling the mean up.
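For concreteness, the unearned-run percentage used throughout is simply unearned runs divided by total runs allowed. A minimal sketch, with invented season lines (not the article's data):

```python
# Hypothetical pitcher season lines; the numbers are invented for
# illustration and are not drawn from the article's data set.
pitchers = [
    {"runs": 80, "earned_runs": 74},
    {"runs": 95, "earned_runs": 85},
    {"runs": 60, "earned_runs": 57},
]

def uerp(line):
    """Unearned run percentage: unearned runs / total runs allowed."""
    unearned = line["runs"] - line["earned_runs"]
    return 100 * unearned / line["runs"]

for p in pitchers:
    print(round(uerp(p), 1))  # 7.5, 10.5, 5.0
```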
There is a slight negative correlation between unearned run percentage (UERP) and ERA. It is rather weak at just -.182, with a coefficient of determination (R2) of .033, meaning that only 3.3% of the variation in UERP is explained by ERA. However, the negative relationship is still statistically significant. Essentially, for every whole point that ERA increases, UERP decreases by 0.9 percentage points. Not a huge change, but considering that half of all UERPs fall between 5% and 10.6%, it is noticeable.
Adding walks and hits per inning pitched (WHIP) as a second independent variable strengthens the model. It makes sense that a pitcher who allows more walks and hits has more opportunities to give up unearned runs with men on base. Some unearned runs are also more the pitcher's doing than others, such as any run allowed after an error on what should have been the third out of the inning.
The positive relationship between WHIP and UERP raises the multiple correlation to .313 and the R2 to 9.8%. For every whole point of WHIP, UERP increases by 12.3 percentage points (.123). ERA is also now a more significant contributor than before, with each point of ERA decreasing UERP by 2.8 percentage points. However, when the coefficients are standardized, ERA has a stronger effect on UERP than WHIP, at -.552 compared to .449.
This is because ERA numbers span a wider range than WHIP numbers. The 25th-to-75th percentile range for ERA from 2000-2010 was 3.52-4.59, while the same range for WHIP was 1.22-1.42. WHIP may appear to have the stronger relationship with UERP, but that is only because a whole point of WHIP is a much bigger jump than a whole point of ERA, since ERA is scaled to the earned runs allowed over a full nine-inning game.
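The conversion at work here, turning raw slopes into standardized (beta) coefficients, is just the slope times the ratio of standard deviations. A sketch with assumed spreads: the standard deviations below are guesses loosely consistent with the quoted percentile ranges, not the real fitted values:

```python
# Assumed standard deviations, loosely consistent with the article's
# 25th-75th percentile ranges (ERA 3.52-4.59, WHIP 1.22-1.42); the
# UERP spread is a guess. None of these are the real fitted values.
era_sd, whip_sd, uerp_sd = 0.80, 0.15, 4.0

def beta(slope, x_sd, y_sd):
    """Standardized coefficient: raw slope scaled by sd(x) / sd(y)."""
    return slope * x_sd / y_sd

print(beta(-2.8, era_sd, uerp_sd))   # -0.56: ERA's standardized effect
print(beta(12.3, whip_sd, uerp_sd))  # 0.46125: WHIP's standardized effect
```

Despite WHIP's much larger raw slope, its standardized effect comes out smaller, mirroring the article's -.552 versus .449.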
Strangely, however, WHIP alone shows no relationship to UERP; the relationship appears only when both WHIP and ERA are in the regression. This speaks to the strong positive correlation between WHIP and ERA, but it also shows that, once ERA is held constant, WHIP does relate to UERP, the pattern statisticians call a suppression effect.
Losses further strengthen the model, bringing the multiple correlation to .365 and the R2 to 13.3%. Each loss a pitcher suffers increases his UERP by 0.3 percentage points. Given that losses span a wider range than WHIP or ERA, this number is larger than it first appears. Overall, losses are still only about a third as influential as ERA, and half as influential as WHIP.
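The standard library has no multiple-regression routine, but a three-variable fit like the one described here can be reproduced with ordinary least squares via the normal equations. A minimal sketch, checked on invented data built from known coefficients (none of it is the article's data set):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting. Each row of
    X should start with a 1 for the intercept."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]                                    # X'X
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]  # X'y
    for col in range(k):                         # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coefs = [0.0] * k                            # back substitution
    for i in reversed(range(k)):
        coefs[i] = (b[i] - sum(A[i][j] * coefs[j]
                               for j in range(i + 1, k))) / A[i][i]
    return coefs

# Invented predictor values; the response is built from known
# coefficients (10, -1, 5, 0.3) so the fit can be verified.
era = [3.0, 3.5, 4.0, 4.5, 5.0, 3.8]
whip = [1.10, 1.30, 1.25, 1.40, 1.45, 1.20]
losses = [5, 8, 10, 9, 12, 7]
X = [[1.0, e, w, l] for e, w, l in zip(era, whip, losses)]
y = [10 - e + 5 * w + 0.3 * l for e, w, l in zip(era, whip, losses)]
print(ols(X, y))  # recovers approximately [10, -1, 5, 0.3]
```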
Thus, even with WHIP, ERA, and losses accounted for, we are explaining only 13.3% of the variation in UERP. That is enough to show these variables carry some significance, but it also shows that other factors influence UERP as well.