RISE AND FALL OF THE SAVE

It turned relievers into stars, reengineered pitching staffs and altered the way fans view the game. THE SAVE fundamentally changed baseball—and it was just as divisive 50 years ago as it is today

BY AND large, baseball's counting stats define tangible events that look exactly as they sound. Anyone can see a home run, a walk, an RBI. There might be some fuzziness around the edges—you can quibble with the logic behind the awarding of a pitching win—but the criteria are still straightforward. These numbers are clear, or as clear as they can be. They're the building blocks of both the game and the box score.

And then there's the save.

More an interpretative definition of an act than an act itself, and only possible under specific criteria, the save has not simply measured relievers' performances. It's shaped them. After Major League Baseball officially recognized the statistic in 1969, the save began to influence teams' approach to relief pitching—"one case of baseball statistics actually changing strategy," as Alan Schwarz wrote in his history of the sport's stats, The Numbers Game. It created an entirely new standard by which to judge relievers, and, in turn, a new motivation for teams to pay them. Fifty years after its birth, the save has helped make millionaires and revolutionized relief.

Which makes it all the more remarkable that the save was born after more than a decade of trial and error, by way of a battle between baseball writers and teams, out of a sea of competing definitions. It introduced an arbitrariness previously unseen in mainstream baseball statistics. And it, quite literally, changed the game.

TO THE extent that most baseball fans know anything about the birth of the save designation, they know this: Jerome Holtzman, a beat writer for the Chicago Sun-Times covering the Cubs, invented the stat in 1959, and baseball adopted it as an official statistic in '69. This version isn't wrong—but it is incomplete, just one piece of a saga that included years of dramatic rule changes and briefly threw the scorekeeping system into chaos. "What is a save?" was not first asked by Holtzman in '59, and it certainly was not last answered by MLB in '69.

In a general sense, "save" has been used to describe a quality effort by a reliever for just about as long as baseball has had relievers—which is to say, just about as long as modern baseball has existed. The first recorded mention of the term was in 1907; it appears in Ty Cobb's '15 memoir. But a "save" was a loose idea rather than a statistic, and a "reliever" was typically just a starter entering a game between his regularly scheduled appearances. This changed over the decades, and by the late '30s it was not unusual to see a dedicated full-time reliever. Yet as relievers grew more common, it became clear that there was no adequate way to measure their work.

Enter baseball's first full-time statistician, Allan Roth. Brooklyn Dodgers president Branch Rickey, sensing an opportunity to gain an edge with a dedicated mathematician in the front office, had hired him in the 1940s. In '51, Roth set about tracking the team's relievers, and he came up with the first formal definition of the save: any nonwinning relief pitcher who finished a winning game would be credited with one, no matter how large his lead. If the team won and he finished the game, he'd earn a save.

The system was imperfect—had a reliever "saved" anything if he entered with a double-digit lead?—but the basic concept began to spread to other teams, to reporters, and to pitchers themselves. From the beginning, the metric was linked to a reliever's earning potential. "Saves are my bread and butter," Cubs reliever Don Elston told The Sporting News in 1959. "What else can a relief pitcher talk about when he sits down to discuss salary with the front office?"

Before long, the save made its first major evolution. In 1960, the stat had a new formula, a new architect, and a new principle to prove.

Holtzman had spent the 1959 season watching Elston and teammate Bill Henry, and he suspected that they were among the best relievers in baseball. However, a different pitcher was getting the attention: Pirates reliever Elroy Face, who had gone 18-1 and been rewarded with a seventh-place finish in NL MVP voting. There was just one problem, Holtzman figured: Face hadn't been that good.

"Everybody thought he was great," Holtzman, who died in 2008, told SPORTS ILLUSTRATED in 1992. "But when a relief pitcher gets a win, that's not good, unless he came into a tie game. Face would come in the eighth inning and give up the tying run. Then Pittsburgh would come back to win in the ninth." (In five of his wins, Face entered with a lead and left without one.)

So Holtzman set out to create his own definition for the save, with criteria much stingier than Roth's. In order to be eligible, a reliever had to face the potential tying or winning run, or pitch one or more perfect frames with a two-run lead. If neither of those situations applied, there was no save opportunity. In 1960, Holtzman began using this formula to track his version of the stat around the game, and a leaderboard was regularly published in The Sporting News.

Soon, the save was everywhere. Fans heard the term from managers, players, reporters. It just wasn't always clear what they meant. Holtzman's definition was popular, but there were teams who evaluated their relievers with a system like Roth's, and there were those who used a formula in between. Johnny Keane, who managed the Cardinals and the Yankees, ignored these definitions and concocted his own—he kept a little black book in the dugout, saying that he made a note "if a pitcher does a good job in protecting the lead," even if he did not close out the ninth.

The situation led to plenty of questions about statistical rigor, but it led to philosophical reflection too. What was a save supposed to measure? A box score had never before had to grapple with questions like these.

The Baseball Writers Association of America decided it had to take action.

In 1963, the BBWAA convened a committee to propose an official save statistic, ultimately settling on the original formula from Holtzman, with one tweak—if a reliever had two or more perfect innings with a three-run lead, he could qualify for a save too.

The BBWAA realized that teams with other statistical formulas might not be thrilled about making the switch, so it didn't ask for the formula to be implemented right away. Instead, the committee suggested, baseball should use this system for a trial period of one year. If teams wanted to propose changes afterwards, they could do so; then, the save might become official. The American League's clubs took a vote and agreed to try it. The National League, however, wasn't so eager. Roth's Dodgers liked their existing formula, and so did several of their fellow clubs. They didn't want to change their record books.

THE NL'S decision received a wave of bad press—writers put the league on blast for blocking the statistic, and the little number caused a big headache. "This issue has been blown completely out of proportion," NL publicity chief Dave Grote griped to The Sporting News in May 1964, a month after the season had begun.

Proportional or not, the criticism worked. The NL bowed to the pressure and decided to use the standardized save system for the rest of the 1964 season. The BBWAA had exactly what it wanted: both leagues' cooperation to track saves under a standard system for the season. But it didn't end as the writers had hoped. After '64's trial period, teams did not move to endorse the save as an official statistic and would not touch it for several years—even as the save continued to spread, in media coverage and salary negotiations and casual conversation.

After '68's Year of the Pitcher, however, baseball found plenty to change about pitching. The playing rules committee shrank the strike zone and lowered the mound, and, from the scoring rules committee, there was one more announcement: for '69, the save would be an official statistic. It wasn't the BBWAA's originally proposed formula, though. It wasn't Holtzman's, either. Instead, it was Roth's. The committee was working on its own, without a proposal from the BBWAA, and so it did not have to cater to the writers' definition. If a reliever entered with a lead, finished with a lead, and could not be credited with the win, he'd earn a save.

After Detroit's John Hiller set a record with 38 saves in 1973, he criticized the metric: "There's no way a relief pitcher can show what kind of year he's had.... Some saves are very important. Some are ridiculous."

For 1974, the scoring rules committee decided to change the rule. To earn a save, a reliever would henceforth have to face the potential tying or winning run, or pitch at least three perfect innings to preserve a lead. It was closer to the definition that had originally been proposed in '64. This didn't mean that it was widely embraced, however. Writers were upset that a drastic change had been made so quickly, without outside consultation, and relievers were frustrated that their core stat had been made so difficult to earn. "The baseball scoring rules committee, in its infinite wisdom, has changed the definition of a 'save' for relief pitchers without doing any research whatever," sniped The Sporting News.

Unsurprisingly, saves plummeted. In 1973, 42% of games ended in a save. After the switch in '74, the rate fell to 27%.

It was maddening for relievers, confusing for fans, and embarrassing for the rules committee. For 1975, the BBWAA formally proposed another change to the statistic, requesting the third official save formula in three years. Its Goldilocks solution was more generous than '74's, less generous than '69's. A pitcher would be credited with a save when:

• He was the finishing pitcher;

• He was not the winning pitcher;

• And he met one of three conditions: a) he entered the game with a lead of no more than three runs and pitched for at least one inning; b) he entered the game with the potential tying run either on base, at bat, or on deck; or c) he pitched effectively for at least three innings.

The committee took this one seriously, and it was approved. Baseball finally had its save—not Roth's, not Holtzman's, but something entirely different—and this one stuck, albeit not always smoothly.

Look no further than one of the first relievers to issue a verdict. "It's not only about sports writers and fans who don't understand what relief pitching's all about," fumed 1974 NL saves leader and Cy Young winner Mike Marshall. "The lords of baseball obviously don't understand either—the ones who make the rules." Decades later, this version of the save is still here, and so, too, is the debate over just who understands how well it really works.

"THERE'S NO WAY A PITCHER CAN SHOW WHAT KIND OF YEAR HE'S HAD.... " SAID HILLER. "SOME SAVES ARE IMPORTANT. SOME ARE RIDICULOUS."

NUMBERED DAYS

Looking back at notable moments in the history of the save

1960s

'60: Sportswriter Jerome Holtzman begins tracking the save.

'63: The BBWAA proposes a save stat that is approved for a trial run.

'69: The save becomes an official stat; L.A.'s Bill Singer gets the first.

1970s

'74: Mike Marshall becomes the first full-time reliever to win the Cy Young Award.

'75: The modern save rule is put in place; Goose Gossage leads MLB with 26.

'79: Bruce Sutter becomes the first pitcher used in save situations 75% of the time.

1980s

'82: Rollie Fingers becomes the first pitcher to reach 300 saves.

'85: For the first time, the Hall of Fame elects a closer: Hoyt Wilhelm.

'88: Tony La Russa begins using his closer primarily in the ninth inning.
