Monday, May 24, 2010

More Endgame Talk

You may have heard about the Atlanta Braves' big seven-run comeback in the bottom of the ninth last week against the Cincinnati Reds. It ended on a grand slam that gave the Braves a 10-9 victory.

But one thing I didn't know was that the Reds, leading 9-3, missed out on chances to add to the score in both the eighth and ninth innings. In both cases, the Reds had runners on first and second base with no outs, but the third batter hit into a double play and the fourth struck out. It seemed unimportant at the time - up until the Braves' remarkable comeback.

The lost opportunities were a topic of analysis in the local SABR forum. (I'm including the link because I think you can sign up, and I'm guessing most of my readers would enjoy some of the topics. I'm not really sure how you sign up - it's been so long since I joined. And please, be nice.) The question is whether the Reds should have tried playing "smallball" to push an extra insurance run across in those two innings.

I replied:

Some folks may find this interesting. Below is the URL for something called the Win Expectancy Tracker, which shows the probability of winning a baseball game in various situations, based on historical results.

http://winexp.walkoffbalk.com/expectancy/search

Using it, I find that from 1997 through 2006, there were 1993 games where the home team entered the bottom of the ninth losing by six runs. They won three of them. So the visiting team won in that situation 99.85% of the time.

How much more would an extra run have helped? During the same time period, there were 4224 games where the home team entered the bottom of the ninth losing by seven OR MORE runs. (Sorry about the OR MORE, but the tool just lumps everything over 6 together.) The home team came back to win just four of those, so they lost 99.91% of the time.

So getting that extra run across would've helped in approximately 0.06% of all games, and that's being generous. How insignificant is that? Let's take a look at another seemingly trivial situation and see how it would rank.

If the leadoff batter of the visiting team just gets on base at the beginning of the first inning, he's improved his team's chances of winning by 4.4%, or about 70 times more than that extra insurance run would've helped in the ninth inning.
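(If you want to double-check that arithmetic, here's a quick Python sketch. Every number in it comes straight from the tracker queries described above; nothing else is assumed, and the exact "70 times" figure wobbles a bit depending on how you round the gap.)

# Back-of-the-envelope check of the win expectancy numbers above.
# Game counts come from the Win Expectancy Tracker queries described in the post.
down_6 = {"games": 1993, "home_wins": 3}       # home team down 6 entering the bottom of the 9th
down_7_plus = {"games": 4224, "home_wins": 4}  # home team down 7 or more

def visitor_win_pct(situation):
    """Percentage of these games the visiting team held on to win."""
    return 100.0 * (situation["games"] - situation["home_wins"]) / situation["games"]

p6 = visitor_win_pct(down_6)        # ~99.85%
p7 = visitor_win_pct(down_7_plus)   # ~99.91%
insurance_run = p7 - p6             # ~0.06 percentage points
leadoff_single = 4.4                # leadoff man reaching base in the 1st, from the tracker

print(f"Up 6 entering the 9th: visitors win {p6:.2f}% of the time")
print(f"Up 7+ entering the 9th: visitors win {p7:.2f}% of the time")
print(f"The insurance run is worth about {insurance_run:.2f} points of win probability")
print(f"A leadoff baserunner in the 1st is worth roughly {leadoff_single / insurance_run:.0f} times as much")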

One could do a similar analysis on the question raised yesterday - at what point do you really need a closer? If there weren't a "save" statistic, at what point would the percentage chance of winning a game justify putting in someone other than your best reliever? Let's use the Win Expectancy Tracker to see historically what percentage of games were won by the home team carrying various leads or deficits into the top of the ninth.

Leading by 5 - win 99.7% of the time
Leading by 4 - win 98.8% of the time
Leading by 3 - win 98.0% of the time
Leading by 2 - win 94.5% of the time
Leading by 1 - win 86.6% of the time
Tied - win 52.2% of the time
Losing by 1 - win 15.2% of the time
Losing by 2 - win 6.3% of the time
Losing by 3 - win 2.9% of the time
Losing by 4 - win 1.3% of the time
Losing by 5 - win 0.6% of the time
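(And for anyone who wants to fiddle with that list, here's a small Python sketch that encodes it and prints how much each additional run of lead is worth. The percentages are just copied from the list above; nothing else is assumed.)

# Home team's historical win probability entering the top of the 9th,
# keyed by run differential (positive = leading), copied from the list above.
win_pct_by_lead = {
    5: 99.7, 4: 98.8, 3: 98.0, 2: 94.5, 1: 86.6,
    0: 52.2, -1: 15.2, -2: 6.3, -3: 2.9, -4: 1.3, -5: 0.6,
}

# Marginal value of each additional run, in percentage points of win probability.
for lead in range(-4, 6):
    gain = win_pct_by_lead[lead] - win_pct_by_lead[lead - 1]
    print(f"Going from {lead - 1:+d} to {lead:+d}: +{gain:.1f} points")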

Looking at those odds, I suppose you can make a pretty good case that whoever tries to hold a three run lead should be the same guy that tries to hold a four run lead. But I would argue that you don't need your best reliever to try and hold a three-run lead, either. 98% of the time a three run lead is safe, for chrissakes.

For the home team, the times a closer should be used include holding a one-run lead and a two-run lead. It also certainly includes a tie game, where giving up a single run decreases the chances of winning by 37%. I suppose one could even make a case for using him when losing by a run, since that second run decreases the chances of winning the game by almost 9%.

So I think it's a fair question to ask what the difference is between protecting a three-run and a four-run lead. But to claim that a closer needs to be used to protect a game that is already won 98.8% of the time seems a little severe.

15 comments:

Anonymous said...

Let's not forget that the 98% figure already takes into account the fact that closers are working the 9th inning in 3-run games most of the time.

I bet it drops at least a few percentage points when lesser relievers are used in 3-run ninth innings.

TT said...

"But I would argue that you don't need your best reliever to try and hold a three-run lead, either. 98% of the time a three run lead is safe, for chrissakes. "

Uh, no. That is only true in a universe in which every team uses its best reliever to hold a three run lead. You have no idea what would happen if they all used their mopup guy. Probably a very different result.

Over 162 games, changing the result 2% of the time is a 3-game difference. That would have kept the Twins out of the playoffs the last two years.

TT said...

One other thing. The "percentage of games were won by the home team carrying various leads or deficits into the top of the ninth" is the wrong data point. There is nothing that happens in the top of the inning that will change that number. What happens affects the win probability going into the bottom of the inning.

The question is how much more likely the home team is to win or lose going into the bottom of the inning if the visitors score. You need to compare the win probability going into the top of the inning with the probability afterwards.

Also, a team is 80% more likely to lose with a three-run lead than with a four-run lead. 1% compared to 1.8%. That is a pretty big difference.

John said...

Let's just clarify a few things...

1. This data wasn't just from teams that followed the "closer must pitch in a save situation" rule. It went back to 1977, which is before that rule was so strictly followed. Though I mostly agree with this point.

2. Changing the result 2% of the time would only result in a 3-game difference if all 162 games had three-run leads going into the top of the ninth. That's obviously not the case. (What's more, one could argue that a bug fart could've cost the Twins their division the last two years. I hate that line of reasoning.)

3. I used that data point because the other ends up being silly. The probability of the home team winning at the bottom of the inning with any lead is 100%. And I concentrated on the home team because in the game in question, the Twins were the home team.

4. I have an 80% better chance of Cheryl Tiegs showing up at my house covered in chocolate than Elle McPhearson, because Tiegs has some ties to MN. But I'm not holding my breath for either one.

TT said...

"a bug fart could've cost the Twins their division the last two years."

Exactly, baseball is a game of inches. Which is why discounting each one of the small things that win baseball games because they are intuitively insignificant is bad analysis. Those small things add up and are often the difference between who makes the playoffs and who doesn't.

"The probability of the home team winning at the bottom of the inning with any lead is 100%."

Again, that is exactly the point, isn't it? If the closer is successful and no runs are scored, the chances of winning are 100%. But if the closer gives up three or more runs, the chances of winning are something less than 100%.

Comparing differences in the score at the top of the inning tells you nothing about how important getting the next three outs is.

"It went back to 1977, which is before that rule was so strictly followed."

That is 33 years ago. Remember Ron Davis? (I try not to.) He was the Twins' closer in the early '80s. That is 25-30 years ago, and he was hardly an innovation. Your sample is basically from the era in which closers were used. But it would be interesting to know whether the probability of a team losing a late-inning lead changed as starters were used less.

"But I'm not holding my breath for either one."

Don't hold your breath waiting for a game where the home team neither wins nor loses with a lead going into the ninth inning either. The question here is not whether one or the other will happen, but which.

The fact is the chances that the home team will lose the game are cut almost in half by having a 4-run lead rather than a 3-run lead. Between 3 and 2 runs it jumps another 2.5 times. I think those are more useful comparisons for determining whether to use your closer.

Of course, a key issue is how many times each situation occurs, because you can only use your closer so many times during the season.

I know I am picking on you. But this is the kind of statistical analysis that drives me crazy. It makes some calculations and then draws conclusions based on a subjective judgment of the numbers' significance.

Nihilist in Golf Pants said...

There is one key consideration besides the odds here.

Is the closer stressed? If he's just closed three nights in a row or has worked an unusually high number of innings, I'd put someone else in with a three- or four-run lead. If he hasn't worked in a day or two, I'd put him in with a four-run lead.

On Saturday, there was no good reason not to put a rested Rauch in to start the 9th.

TT said...

Sorry Geek -

"The question here is not whether one or the other will happen, but which. "

I got that wrong. The question was how likely a team was to lose and the chances of their losing in either case are small. But the difference between 1% and 1.8% chance is still significant.

Anonymous said...

This gives me an opportunity to raise an issue that's been bugging me for some time. It seems to me that every pitching change has a non-zero probability of bringing in somebody who's just not "on" that day. So if you've got a guy who's looked good getting 2 or 3 outs in the 8th, why is it axiomatic that you dump him and bring in your closer in the 9th if the game is close? I'd argue that you don't just want your "best pitcher," you want the pitcher who's best for that situation on that day. Which hitters are coming up, who's had how much rest, and many other factors would all be relevant--but wouldn't you put a lot of weight on the performance you've JUST SEEN?

Similarly going into the 8th: if a reasonably fresh reliever has finished the 7th and shown evidence of being locked in, why automatically go to your "setup man"?

Granted, if Mariano Rivera is your closer, then the chance of him blowing a game is low (though not zero, as the 2010 Twins and 2001 Diamondbacks, among others, have seen to our delight!) and probably lower than for whoever pitched the 8th, no matter how good they looked. But if your closer is an ordinary mortal? Say, Joe Nathan at the end of last year? Why not play the "hot hand"?

TT said...

The really interesting part of the data you provide has nothing to do with the question you raise. It's that it clearly shows that in a tie game, the first run that scores in the bottom of the 8th raises a team's chances of winning by 66%, more than all the additional runs combined. The second run raises the team's chances by 10% and the third run raises the team's chances by 4%. If the home team starts the bottom of the 8th down by a run, the first run scored raises its chances of winning by 343%.

Remember that when you use run probabilities, runs are not all equal or even close to it.

Anonymous said...

Quoting TT:

"The fact is the chances that the home team will lose the game is cut almost in half by having a 4 run lead rather than a 3 run lead. Between 3 and 2 runs it jumps another 2.5 times. I think those are more useful comparison for determining whether to use your closer.

...

"I know I am picking on you. But this is the kind of statistical analysis that drives me crazy. It makes some calculations and then draws conclusions based on a subjective judgment of the numbers' significance."

--------------------
TT, this isn't as subjective as you think. If what you want to estimate is how many more wins a team will win, then by definition, that's the absolute difference in win probabilities -- WET(A) - WET(B) -- times the expected number of occurrences of the situation in question. The relative difference in the win probabilities, WET(A)/WET(B) -- or in the loss probabilities, (1-WET(A))/(1-WET(B)) -- is NOT meaningful.

Since Geek's Tiegs / McPhearson example may have been too, um, distracting, let's stick to numbers. You say that the 80% (relative) difference between loss probabilities of 1% and 1.8% is "pretty big", and you also cite ratios of 66% and 343% in a later message. Okay then, how about a ratio of 10,000%? You can get that by comparing a situation with a loss probability of 1 in 1 million to another situation with a loss probability of 1 in 10,000; the ratio is 100, or 10,000%. But that huge relative difference is insignificant in absolute terms: even if the comparison applied to all 162 games, the expected difference in total wins for the season would be 0.016 -- essentially zero.
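(If it helps to see it as arithmetic, here's a minimal Python sketch of that calculation. The only inputs are the 1-in-a-million and 1-in-10,000 loss probabilities and the 162-game season from the example above.)

# Expected extra wins = (absolute difference in win probability) x (times the situation comes up).
def extra_wins(win_prob_a, win_prob_b, occurrences):
    return (win_prob_a - win_prob_b) * occurrences

p_a = 1 - 1 / 1000000     # loss probability of 1 in a million
p_b = 1 - 1 / 10000       # loss probability of 1 in 10,000

loss_ratio = (1 - p_b) / (1 - p_a)        # 100, i.e. the "10,000%" relative difference
season_value = extra_wins(p_a, p_b, 162)  # ~0.016 wins over a full season

print(f"Relative difference in loss probability: {loss_ratio:.0f}x")
print(f"Expected extra wins over 162 games: {season_value:.3f}")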

TT said...

"If what you want to estimate is how many more wins a team will win, then by definition, that's the absolute difference in win probabilities -- WET(A) - WET(B) -- times the expected number of occurrences of the situation in question."

Sort of - it's the number of wins the average team got in that situation since 1977, when teams have usually used their closer. Expectations for any individual team are a different issue, as is the issue of what the difference would be if the closer wasn't used.


"the expected difference in total wins for the season would be 0.016 -- essentially zero."

But that is not the case here. The relative historic difference between a one- and two-run lead over 162 games has been 3 games. In the context of an entire season, that apparently slight difference is important.


Of course there aren't that many games with that situation in a season. But the question here is the relative value of using your closer in each of those situations.

Using your closer to successfully protect a 3-run lead will produce 80% more wins than using that same closer to successfully protect a 4-run lead. That assumes the closer has the same impact on the likelihood the team will lose in both cases, which is probably no more true than the assumption that the current percentages aren't a product of how closers are used.

If you want to argue the whole issue is not very important since it doesn't occur very often, that would probably be true. But that is a product of the number of situations, not the relative result. And it is true of a lot of individual things in baseball. In fact, how many games are there where the home team has a one-run lead, or is tied, or is behind by one run? Individually, probably not that many. But the results in those individual situations still add up to the difference between winning and losing seasons.

I suspect you wouldn't find a lot of serious investors who think the difference between a 1% and 1.8% return on investment is insignificant. And if you changed the numbers to 10% and 18%, they would treat the relative value of the two investments the same, even if the absolute number of dollars in return was different.

Anonymous said...

TT, you're making a very good point, but it's getting obscured by screwy numbers.

"Using your closer to successfully protect a 3 run lead will produce 80% more wins than using that same closer to succesfully protect a 4 run lead."

First of all, the numbers have gotten mis-copied. We went from 98.0% and 98.8% to 1.0% and 1.8%, instead of 2.0% and 1.2%. So we're talking about a roughly 67% greater loss probability with a 3-run lead than with a 4-run lead, not 80%.

Far more important, though: you do NOT get anywhere close to 67% more wins looking at those two numbers, you get 0.8% more wins! The 67 percent is the additional losses avoided.

This is proving my point: for the same change in win percentages, the ratio of losses looks big and the ratio of wins looks small, and that's because the ratios are meaningless. It's the absolute, not relative, difference that matters.

Put another way, I agree that something that gets you three more wins per season is important--and that calculation uses the absolute differences, NOT the ratios.
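(To put that in code form, here's a tiny Python sketch using only the 98.0% and 98.8% figures from the post. It just shows how the same 0.8-point gap looks enormous as a ratio of losses and negligible as a ratio of wins.)

# The same 0.8-point gap, viewed three ways (win percentages from the post).
win_3_run_lead = 0.980   # home team holds a 3-run lead entering the top of the 9th
win_4_run_lead = 0.988   # same, with a 4-run lead

absolute_gap = win_4_run_lead - win_3_run_lead            # 0.008 -> 0.8 percentage points
loss_ratio = (1 - win_3_run_lead) / (1 - win_4_run_lead)  # ~1.67, the "67% more losses"
win_ratio = win_4_run_lead / win_3_run_lead               # ~1.008, only 0.8% more wins

print(f"Absolute gap: {absolute_gap * 100:.1f} percentage points")
print(f"Ratio of loss probabilities: {loss_ratio:.2f}")
print(f"Ratio of win probabilities: {win_ratio:.3f}")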

Parenthetically, you absolutely would find many investors who would regard the difference between 1.0% and 1.8% rates of return as insignificant--because they wouldn't be bothering with either one! They'd be putting their money in investments paying 6% or 10% or whatever. Which brings us back to Geek's original point, that a 2% loss probability is already pretty small potatoes.

Which in turn brings us to your important point, which is that the observed data only tell us what the win probabilities are with the strategies that were actually used. We don't know what the share of wins would have been if clubs had never used their closers to protect three-run leads in the top of the 9th. Maybe the 98% averages a 99% success rate when the closer starts the 9th and a 95% success rate in the relatively few cases where he doesn't. Or maybe that latter figure is 90%, or... The point is that we don't know what it is, so we can't tell from these data how big a gamble managers would be taking not to use their closers.

That, I promise, is my last word on the subject. Back to work! ;-)

TT said...

"TT, you're making a very good point, but it's getting obscured by screwy numbers."

You got that right - I completely messed up the wins versus losses issue. My point was that if a closer's job is to avoid turning a win into a loss, then he has almost twice as much value with a 3 run lead compared to a 4 run lead.

"you absolutely would find many investors who would regard the difference between 1.0% and 1.8% rates of return as insignificant--because they wouldn't be bothering with either one! They'd be putting their money in investments paying 6% or 10% or whatever."

If you can find me an FDIC-insured CD with those returns, let me know where :) The point was that the relative difference matters when you invest money, and it matters in the decision of when to invest your closer's limited appearances as well. If you can find a place to use the closer that will give you a 6% to 10% return with the same risks, that is a different comparison than a 3-run lead versus a 4-run lead.

"a 2% loss probability is already pretty small potatoes."

And the question, and my initial criticism, is: compared to what? Every situation is small potatoes by itself.

Here is the progression of loss probabilities for leads of 5 down to 1 run, with the difference from having one more run:

Leading by 5: 0.3%
Leading by 4: 1.2% (+0.9)
Leading by 3: 2.0% (+0.8)
Leading by 2: 5.5% (+3.5)
Leading by 1: 13.4% (+7.9)

If you look at those patterns, I think you may see the impact of the closer showing up. Although you have only a small number of data points, the change from 5 to 4 runs is bigger than the change from 4 to 3, and then the pattern reverses itself again.

by jiminy said...

This whole argument, at least as it relates to the Twins game the other day, presupposes that your closer is your best pitcher, and by a significant margin. If it's Joe Nathan, that's one thing. If it's Rauch, the only real difference between him and the rest of the guys is that he happens to close, more or less by default. In that case, individual matchups outweigh any mystical quality of "closerness," so a lefty specialist like Mahay facing two out of three lefties is the right call.

And even if you think Rauch does have magic powers now that he has been designated "closer," and has magical "save" statistics to prove how good he's thereby become, it's still the right call, because there was a day game the next day, in which the likelihood of entering the ninth inning with a four run lead was markedly lower. Not to mention that the game was close to 98% won already either way. And that Rauch was facing the bottom of the order.

Personally, I think you could probably make a stronger case for pitching your worst reliever with a four-run lead than your best reliever. That's pretty close to the definition of mop-up time. Get the scrubs some work. It's like entering the last minute of a basketball game with a ten-point lead. You don't have to put in your bench players, though you could if you felt like it. But what you don't want is your star driving the lane in traffic and risking a hard foul when the game is almost over. Save him for another day.

Anonymous said...

"a team is 80% more likely to lose with a three run lead as they are with a four run lead. 1% compared to 1.8%. That is pretty big difference."

I think you're confusing the difference between 1% : 1.8%, and 98% : 98.8%.

The difference between 1 and 1.8 is, indeed, 80%.

The difference between 98 and 98.8 is, indeed, NOT 80%.

Nice try, TT.