What to Do About Earned Run Averages, Part 1
The following is an article I wrote in 1995 about Earned Run Averages, the correlation between earned runs and fielding percentages, and how to calculate ERAs for players from years before ERAs appeared in baseball guides.
This will be a four-part series. Part one explains the problem; tomorrow, part two covers how I stumbled upon a solution and what that solution was. In the third part I will go into depth with proof that the system works.
Finally, I will explain what I have learned since 1995, post a new, revised chart, and conclude with what I am planning in order to bring this study to an end.
By Carlos Bauer
How many times have you sat down with one of the Minor League Stars volumes, or with The Minor League Register, and wondered what such-and-such pitcher’s ERA might have been while staring at the almost blank column underneath the heading ERA? If you’re like me, it’s got to be a million times. What I will propose in this article is a way to deal with those blanks— and even a possible way of evaluating the “quality” of present-day pitchers’ ERAs.
As everyone knows, the ERA is extremely situation dependent; i.e., if a pitcher gives up six runs before there is an out in an inning, he’s charged with six earned runs, and conversely, if he gives up those same six runs after there are two outs and a man has reached on an error, his ERA will remain as pure as a celestial virgin. These are two extreme cases, yet these are the types of things we have to keep in mind when trying to assign an ERA to a pitcher whose earned runs are not known.
A Short History of Futility
Even before I decided to look for a way to estimate ERAs, I had started using Run Average, which is calculated the same way as ERA but differs in that it uses Total Runs Allowed rather than Earned Runs. (I believe that both Tom House and Bill James have written about RAVG, or R/9 inn, as being a much fairer way of evaluating pitchers, especially today.) While this method of rating pitchers was more than adequate for my final league averages projects, it didn’t seem like anyone would be talking about Bob Gibson’s 1.45 RAVG in 1968. The more I calculated RAVGs, the more I knew I would have to come up with a way to calculate ERAs.
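The two rate statistics compared above can be sketched as follows. These are the standard formulas (runs scaled to a nine-inning game), not anything taken from the article itself; the sample line uses Gibson’s commonly cited 1968 totals (304 2/3 innings, 49 runs allowed, 38 earned) as a check.

```python
# Both ERA and Run Average (RAVG) are runs per nine innings;
# ERA counts only earned runs, RAVG counts all runs allowed.

def rate_per_nine(runs: float, innings: float) -> float:
    """Scale a run total to a nine-inning game."""
    return 9.0 * runs / innings

def era(earned_runs: float, innings: float) -> float:
    return rate_per_nine(earned_runs, innings)

def run_average(total_runs: float, innings: float) -> float:
    return rate_per_nine(total_runs, innings)

# Bob Gibson, 1968: 304 2/3 IP, 49 runs, 38 earned runs.
ip = 304 + 2 / 3
print(round(run_average(49, ip), 2))  # → 1.45
print(round(era(38, ip), 2))          # → 1.12
```

The gap between the two numbers (1.45 vs. the famous 1.12) is exactly the "blank column" problem: total runs are almost always recorded, earned runs often are not.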
Over the last year or so, I tried several methods to come up with an estimated ERA, none of which turned out to be very useful.
To begin with, I thought that it would be very easy to come up with a correlation between Team Fielding Average and the percentage of runs allowed that were earned. I figured that all I would have to do was put the league fielding average for every league in the history of Major League ball, along with its corresponding percentage of Earned Runs, into a spreadsheet, sort— and that would be that.
Well, about the only thing I found out after spending a few days on that was that a .825 FAVG had fewer earned runs associated with it than a .986 FAVG. The numbers in between, though broadly progressing, jumped all around. It turned out to be useless as a tool, even after I averaged all the percentages at any given fielding average.
After that I started to try weird stuff, like using FAVG plus Opponents’ On Base Average compared to earned runs. (What did Dr. Hunter S. Thompson once write? Was it: “When the going gets weird, the weird get weirder”?) Needless to say, the more things I tried, the further away from my objective I got.
Tomorrow I will explain how I stumbled on the solution to my dilemma. I will also post my first chart to use with what I came up with.