A new wave of baseball statistics and the study of them, known as sabermetrics, has come to the fore in the last two decades or so. My familiarity with it, barely more than passing, dates from 2003. In that year Michael Lewis published his book Moneyball: The Art of Winning an Unfair Game.1 The book chronicled the efforts of Oakland Athletics general manager Billy Beane and his assistant, Paul DePodesta, to assemble a low-payroll but winning team for Oakland. They succeeded in doing just that. Later a Beane follower, Theo Epstein, took similar learning and methods to Boston, using new-age statistics in helping assemble teams for the Red Sox. The movie Moneyball (2011, starring Brad Pitt and Jonah Hill) took Beane and DePodesta’s efforts to the screen.
The progenitor of new ways of evaluating baseball players and teams is Bill James, who, beginning in the late 1970s, published numerous books (his “abstracts”) outlining his approaches and the incomplete nature of traditional baseball performance measures. Early on James’s annual abstracts appeared in mimeographed form and are the movement’s oldest antecedents.2
Practitioners of these new arts are known in certain quarters as “sabermetricians,” after the Society for American Baseball Research (SABR). An inner circle of SABR members holds a sabermetrics meeting each year. Practitioners of those somewhat arcane arts are denominated, often pejoratively, as “stat rats.” Another more neutral moniker for the movement is Rotisserie Baseball, derived from the name of a Manhattan restaurant (La Rotisserie Francaise) where Sports Illustrated writer Dan Okrent began convening study groups in the early 1980s.3
Some of the performance measures the new approach has generated include the following:
• Runs Created (RC); in its basic form, RC = (hits + walks) × total bases ÷ (at bats + walks)
• Range Factor (RF); RF = (putouts + assists) ÷ games played
• WARP: wins above replacement player (i.e., next man up)
• OPS: on-base percentage (OBP) plus slugging percentage; a statistic giving insight into possession of the blend of attributes considered most desirable in a player4
• WHIP: walks plus hits divided by innings pitched; shows a pitcher’s propensity for allowing, or not allowing, runners on base
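These measures are simple arithmetic on a player’s counting statistics. As a sketch, using the basic versions of the formulas above and invented player numbers (Runs Created in particular has many later refinements, and the on-base percentage here is simplified to ignore hit-by-pitches and sacrifice flies):

```python
# Illustrative calculations of the new-age statistics described above.
# All player numbers are invented for the example.

def runs_created(hits, walks, total_bases, at_bats):
    """Bill James's basic Runs Created: (H + BB) x TB / (AB + BB)."""
    return (hits + walks) * total_bases / (at_bats + walks)

def ops(hits, walks, total_bases, at_bats):
    """On-base percentage plus slugging (OBP simplified: no HBP or sac flies)."""
    obp = (hits + walks) / (at_bats + walks)
    slg = total_bases / at_bats
    return obp + slg

def whip(walks, hits, innings_pitched):
    """Walks plus hits allowed, divided by innings pitched."""
    return (walks + hits) / innings_pitched

# A hypothetical hitter: 150 hits, 60 walks, 250 total bases in 500 at bats.
print(round(runs_created(150, 60, 250, 500), 2))  # 93.75
print(round(ops(150, 60, 250, 500), 3))           # 0.875
# A hypothetical pitcher: 50 walks, 180 hits allowed in 200 innings.
print(round(whip(50, 180, 200), 2))               # 1.15
```

The point of the arithmetic is that each statistic reduces to a ratio a fan can compute from an ordinary box score.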
If you have ever attended a game in which marquee high school or college players are involved, you have probably seen the Major League scouts. Each scout carries four items: a folding aluminum lawn chair, a seat cushion for the chair, a thick briefcase filled with statistics and reports on every amateur player of note, and a handheld speed gun. The scouts position themselves behind the backstop, where they can begin clocking the speed of the teams’ pitchers. At games in which one or two reputed “pheenoms” are scheduled to appear as many as seven or eight scouts hover.
All, or almost all, of these scouts rely primarily on similar “sight-based scouting prejudices: the scouting dislike of short right-handed pitchers, or the distrust of skinny little guys who get on base, or the scouting distaste for fat catchers.”5 The scouts look for players with Major League looks: six feet three, 195 pounds, and Hollywood handsome. They look for young eighteen- and nineteen-year-old pitchers whose fastballs travel at ninety-five, ninety-six, or ninety-seven miles per hour. Twenty-three or twenty-four is already too old.
The junk ball pitcher Jamie Moyer, who successfully pitched for many years at Seattle and then Philadelphia, had a fastball that clocked, at best, in the low eighties. Had the decision whether to introduce Moyer to organized baseball been left to the scouts, almost all of whom think alike, Moyer’s career would have been as a high school teacher or an insurance salesman, rather than as a star pitcher.
Scouts actually carry checklists. “Tools is what they call the talents they [check] for in a kid. There [are] five tools: the abilities to run, throw, field, hit, and hit with power.”6 By contrast, to a sabermetrician, “foot speed, fielding ability, even raw power tend to be dramatically overpriced. The ability to control the strike zone [is] the greatest indicator of future success. The number of walks a hitter draws [may be] the best indicator of whether [the player] understands how to control the strike zone.” Sacrifice bunts, long considered a valuable element in a “small ball” baseball offense, are not considered valuable at all because they result in one more out.
The logic of new approaches to evaluating baseball players’ performance, which, for instance, results in severe downgrading of the sacrifice bunt as an offensive tool, is compelling, as practitioner Eric Walker once wrote:
Far and away—far, far and away—the most crucial number in baseball is 3: the three outs that define an inning. Until the third out, anything is possible; after it, nothing is. Anything that increases the offense’s chance of making an out is bad; anything that decreases it is good. And what is on-base percentage? Simply put, it is the probability that the batter will not make an out. When we state it that way, it becomes crystal clear that the most important isolated statistic is the on-base percentage.7
Every batter, then, should think and attempt to act like a lead-off man and adopt as his main goal getting on base. After that, almost every batter should possess the power to hit home runs, in part because home run power forces opposing pitchers to pitch more cautiously, leading to walks and higher on-base percentages. And home runs clear the bases, which ideally will already be loaded with runners.
The most important team statistic, then, is runs scored. As far back as 1979, Bill James had written, “I find it remarkable that, in listing [teams’] offenses, [baseball leagues] will list first—meaning the best—not the team which scored the most runs but the team with the highest batting average.” The Jamesian observation reflects that most of organized baseball—the owners, its managers, coaches, and players—remain “thoroughly inoculated against outside ideas.”8 Batting averages, sacrifice bunts, and hit-and-run plays remain lodestars. The baseball religious wars continue.
The most ubiquitous and traditional measure is fielding percentage. If, for instance, over a season, an outfielder has five hundred chances, that is, baseballs hit his way, and he muffs twenty of these, his fielding percentage is .960. If he muffs one hundred of them, his fielding percentage drops to .800 (and he may well be demoted to the minor leagues, or worse).
On a wider basis, however, fielding percentage may not be very reliable at all as a measure of performance. The safest way to have a high fielding percentage, perhaps an error-free one, is to be too slow to reach the ball in the first place. Often the daring, fast fielder may have a fielding percentage the same as or even lower than that of the plodding defensive player with circumscribed range. So Bill James and those who follow his approach have substituted another statistic, “range factor,” for fielding percentage. They compute the number of successful plays a player has made in the field per game, not the number of successful chances out of all chances for that player.
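The contrast is easy to see with invented numbers. A sketch, assuming a hypothetical plodding fielder who reaches fewer balls but never errs, and a hypothetical rangy fielder who reaches more balls and muffs a few of them:

```python
# Fielding percentage rewards sure hands; range factor rewards reaching
# the ball at all. Both fielders and their numbers are invented.

def fielding_pct(successful_plays, errors):
    """Successful chances divided by total chances."""
    return successful_plays / (successful_plays + errors)

def range_factor(successful_plays, games):
    """Successful plays (putouts plus assists) per game, per Bill James."""
    return successful_plays / games

# Plodder: reaches 400 balls in 150 games, makes no errors.
# Rangy fielder: reaches 500 balls in 150 games, muffs 20 of them.
print(fielding_pct(400, 0))              # 1.0, a "perfect" fielding percentage
print(round(fielding_pct(480, 20), 3))   # 0.96
print(round(range_factor(400, 150), 2))  # 2.67 plays per game
print(round(range_factor(480, 150), 2))  # 3.2 plays per game
```

By fielding percentage the plodder looks flawless; by range factor the daring fielder makes a fifth more plays per game, which is the difference James’s statistic is designed to capture.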
In this book I have used the traditional measures of a baseball player’s performance. For hitters the relevant statistics include at bats (ABs), batting averages, runs-batted-in (RBIs), home runs, runs scored, strikeouts, and slugging percentages. In the field, for position players the relevant statistic has been, for me, the old-fashioned and outmoded fielding percentage. For pitchers the focus remains on the win-loss record, the earned run average (ERA), and the ratio of strikeouts to walks.
I do not use any of the newer, in almost all cases more accurate or useful, statistical methods for several reasons. First, I simply do not fully understand many of the new measures of baseball performance. I have Michael Lewis’s and Bill James’s books on my bookshelf, but it would be presumptuous of me to purport to apply them to yesteryear’s performances. I am a newbie, a neophyte in the area of sabermetrics.
Second, those statistics generally are not available for yesterday’s baseball players. It would be hard work to compute them today. True, Web sites exist that claim to provide access to the box score of every Major League game ever played, or at least as far back as a book of this nature might require. But ferreting out game-by-game performances for Larry Doby and other players of that era and deriving new-age statistical comparisons from those statistics seems a prodigious, if not impossible, task.
Third, Doby played in a different era, when the game was more of a pitcher’s than a hitter’s or power hitter’s game. So his numbers may be lower than they otherwise would be. A big difference between then and now was the height of the pitcher’s mound. The pitcher’s mound in the forties and fifties was supposed to be no more than fifteen inches higher than home plate, but authorities rarely policed it. As a result the pitcher’s mounds in some Major League parks were twenty or more inches higher than home plate. For instance, Shibe Park in Philadelphia was known for having a greatly elevated pitcher’s mound. From a higher mound a pitcher’s downward weight shift and momentum were much greater, enabling him to generate greater velocity on his pitches. After the 1968 season, though, Major League Baseball lowered the pitcher’s mound to no more than ten inches above home plate, where it remains today.9 Major League officials also policed the requirement. The results were significant increases in the number of players hitting more than twenty or thirty home runs in a season.
Fourth, overall I am reminded of Bill Bryson’s admonition in his book One Summer: America, 1927: “It is generally futile and foolish to compare athletic performances across decades.”10