As many of you know, I contribute content for ESPN Fantasy Baseball. I was recently tasked with generating my top 250 for the period from May 16 through the end of the season. What’s done is done. The instructions were to rank expected performance going forward. My rankings, to put it kindly, caused quite a stir. So much so, in fact, that a reader felt compelled to honor me with a troll Twitter account dedicated to my stupidity.
Interspersed within the comments was the typical Pee Wee Herman rhetoric: apparently I’m an idiot and a moron, I’m stealing money from ESPN, and I’d be welcomed into a slew of leagues. None of this bothers me. I’ve been called worse and completely understand what comes with the territory. However, there were a couple of insinuations that did get under my skin a bit.
There were several references suggesting my rankings were nothing more than a publicity stunt to draw attention to myself, perhaps in an effort to increase hits for my Insider columns. I was accused of pulling names out of a hat or using a random number generator to come up with my rankings. The irony here is that, other than perhaps my colleague Tristan H. Cockcroft’s, mine was the only set of completely formulaic, spreadsheet-driven projections. I guarantee mine were the least subjective of the lot. This is not to say my way is right and the others are wrong; I’m simply pointing out the irony, and the fallacy, of the accusations.
Since the system I used to generate the ESPN rankings is the same engine I use to produce the Mastersball Platinum rest-of-season projection updates, I thought I’d kill the proverbial two birds with one stone and reiterate my philosophy with respect to projections. Platinum subscribers will receive a bit more detailing the actual procedure, but since I haven’t done this in a while, this is a perfect time to wax poetic on my philosophy while briefly introducing the new means I am employing to compute in-season projections.
The most important aspect of this discussion is understanding the true nature, meaning and application of a projection. A projection is a weighted average of a set of logical outcomes. While conventionally a projection is offered as a static number, it is in truth a range. Projections are best thought of as a bell curve, with the poorer outcomes to the left and the favorable outcomes to the right. What we call the projection is the apex of the curve. By focusing on a static entity, we often lose sight of the fact that a bad year is really just an outcome to the left of the apex, while a good year is to the right. Both are within the range of possible outcomes. And yet, if those of us in the business of prognostication say Prince Fielder will hit 35 homers and he hits 29, we were wrong. It’s not that Fielder ended up within the lower end of his range of possible outcomes. We were wrong.
Here’s where my philosophy isn’t universally shared and is annually called into question on our message forum. For me, a projection is completely objective. The secret sauce fueling my projection engine is 100 percent numbers driven. What’s good for the goose is good for the gander.
In order to abide by such a philosophy, certain traits are necessary. You have to be disciplined. You need to be conscientious, so the secret sauce is always reflective of the most current research. You have to be obstinate. But perhaps most importantly, you need to have incredibly thick skin so you can accept being wrong.
This may seem counter-intuitive and downright ridiculous if you don’t truly understand projection theory. The objective is not to be right (which is what the masses shoot for). Trying to be right introduces the subjective bias that I avoid. The goal is to identify the most probable outcome. This is the ultimate irony of some of the comments on my ESPN rankings. Because I did not follow the herd on several players, the interpretation was that I was being too cute in an effort to differentiate myself and be able to say I was right. Whereas the reality is, that’s where my completely objective spreadsheet said to rank the player.
By subjective bias, I am referring to the act of treating two players with a similar trait differently. How many times have you heard a player is in store for a good year because of a solid second half or even a great September? My response is to pick out another player with a strong second half and ask why he isn’t being afforded the same credit. What’s good for the goose is good for the gander. Either everyone gets a bump for a better second half or no one does. And if that’s the case, the criterion is no longer subjective, but objective.
In an effort to be right, many project numbers to the left or right of the apex. To be clear, this is not the same as betting on the come and purposely drafting or buying the upside of a player; I’ll do that all the time. I am speaking of subjectively projecting a player to do better or worse than what the numbers say for one reason or another.
This is where things get hairy, and it is exactly akin to the old-school-versus-new-age scouting conundrum. There was a time each spring when my cell phone would ring and someone whose opinion I trust (who may or may not have founded this site, moved on to work at ESPN and become a professional scout) would be on the other end, sharing a tidbit about a guy with a new pitch or a reworked swing designed to generate more power. I’d like to think I’m good, but I don’t have a way to work that into my secret sauce other than to subjectively change a strikeout rate or HR/FB ratio, etc. And we all do that. It’s just that some use less salient information, all the way up to the extreme of a whim.
Here’s an interesting way to think about it. I’m going to roll a pair of dice 36 times. Based on probability, here is the expected distribution of outcomes:
2 and 12 – 1 time
3 and 11 – 2 times
4 and 10 – 3 times
5 and 9 – 4 times
6 and 8 – 5 times
7 – 6 times
That's actually what a projection should look like.
What if I were to do a single roll and ask you to predict the outcome? What would you say? Objectively, you should say seven. Anything else is subjective. The analogy is far from perfect, but some prognosticators will choose a number other than seven in an attempt to be right. I’m only going to do that if I know for a fact one of the dice is loaded. And even then, I’m going to feel dirty afterwards and apologize profusely to my spreadsheet for overriding it.
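For the curious, the dice distribution above (and its apex) can be verified in a few lines of Python. This is just an illustration of the probabilities in the analogy, not any part of the projection engine:

```python
from itertools import product
from collections import Counter

# Tally each possible sum across the 36 equally likely outcomes
# of rolling two fair six-sided dice.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    print(f"{total:2d}: {counts[total]} in 36")

# The mode of the distribution, the "apex of the curve," is 7.
apex = max(counts, key=counts.get)
```

Run it and you get exactly the table above: seven shows up six times in 36, the twos and twelves once each, and everything else in between.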
With that as a backdrop, I’d like to share a CliffsNotes version of how I generated the in-season projections used in the ESPN rankings as well as for Platinum subscribers. But first, a nutshell review of the general process is necessary. I’ll focus on hitters; the same principle applies to pitching.
Everything is skills based using the plate appearance (PA) as a foundation. Using BB/PA the number of walks is determined. Similarly, HR/PA yields the number of home runs while K/PA renders the number of strikeouts. Subtracting walks from PA leaves at bats. We already know how many of these AB are HR and whiffs. Using BABIP, the number of non-home run hits can be computed. These can be separated into singles, doubles and triples based on history. We now have almost everything except RBI, runs and SB. I have proprietary formulas that produce these stats based on team tendencies, batting order, etc. We now have our projection.
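To make the plate-appearance arithmetic concrete, here is a rough sketch using made-up rates for a hypothetical hitter. The rates and playing-time figure are illustrative numbers, not anyone’s actual projection, and the proprietary run, RBI and SB formulas are omitted:

```python
# Illustrative only: made-up rates for a hypothetical hitter.
pa = 600           # projected plate appearances
bb_per_pa = 0.10   # walk rate
k_per_pa = 0.20    # strikeout rate
hr_per_pa = 0.05   # home run rate
babip = 0.300      # batting average on balls in play

walks = pa * bb_per_pa                 # 60 walks
hr = pa * hr_per_pa                    # 30 home runs
strikeouts = pa * k_per_pa             # 120 strikeouts
ab = pa - walks                        # 540 AB (ignoring HBP/sacrifices)
balls_in_play = ab - hr - strikeouts   # 390 non-HR balls in play
other_hits = balls_in_play * babip     # 117 singles/doubles/triples
hits = other_hits + hr                 # 147 total hits
avg = hits / ab                        # roughly a .272 batting average
```

The singles/doubles/triples split would then come from the player’s history, per the process described above.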
I use the exact same means to generate the in-season projection. The trick is adjusting the skills based on the limited sample as well as teasing out the luck element, particularly with respect to BABIP and HR/FB. But even those entail a skill element, so that whatever is not skill is attributed to luck.
I’ll spare you the details, but there is some very interesting work out there with respect to when certain skills stabilize. In fact, this work has been updated very recently, so the soon-to-be-discussed regression is better defined. To give credit where it is due, I am referring to the work of Russell Carleton (Pizza Cutter) and Tom Tango (Tangotiger). Both are well respected within the SABR community. A Google search will turn up the work to which I refer.
What I do is use the skills stabilization data to regress the current skill level to the historical level. Let’s say one of the aforementioned skills showed 50 percent stability at 300 plate appearances. This means at 300 plate appearances, there is a 50 percent chance the current level is real. So when the player has reached 300 plate appearances, his new skill is an average of current and originally projected.
Anything fewer than 300 plate appearances is treated linearly, even though the relationship is not truly linear. I just don’t have the ability to program the non-linear relationships into my engine. The difference is going to be minimal; regressing in the linear manner does the job just fine. Keeping with this example, after 100 plate appearances, the current portion of the weighted average would be 50 percent times 100/300 or 16.67 percent leaving 83.33 percent as what I projected coming into the season. I regress all the above skills in this manner and plug them into the black box to generate the new projections, which obviously also encompasses my admittedly subjective estimate of playing time.
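The linear blend described above can be sketched as follows. The function name is my own, the numbers track the 300-plate-appearance example from the text, and capping the weight at the stabilization threshold is my illustrative assumption:

```python
def regress_skill(current_rate, preseason_rate, pa,
                  stab_pa=300, stab_weight=0.5):
    """Blend an observed in-season rate with the preseason projection.

    At stab_pa plate appearances the observed rate receives stab_weight
    of the blend; below that, the weight scales linearly with PA. The
    true relationship is non-linear; this is the linear approximation
    described in the article. Capping at stab_pa is an assumption here.
    """
    w = stab_weight * min(pa, stab_pa) / stab_pa
    return w * current_rate + (1 - w) * preseason_rate

# Example from the text: at 100 of 300 PA, the current rate gets
# 50% * 100/300 = 16.67% of the weight, the preseason rate 83.33%.
blended = regress_skill(current_rate=0.30, preseason_rate=0.20, pa=100)
```

With a hypothetical observed strikeout rate of .300 against a preseason projection of .200, the 100-PA blend lands at roughly .217, still much closer to the preseason number.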
Different skills stabilize at different rates. What’s getting me into trouble over at the World Wide Leader is that contact rate stabilizes very quickly. Jay Bruce has opened the season by fanning at an elevated rate, and this is captured by my engine and reflected in a low projected batting average, resulting in a ranking that is being ridiculed left and right. Jay Bruce has a history, the thinking goes; he’ll end up right where he always does, and I’m an idiot for saying otherwise. Well, in another life I was a scientist, and we’re trained to believe facts generated by research as opposed to intuition. The fact is, it is probable Jay Bruce will fan more than usual, so once his lucky BABIP corrects, his average from here on out will be poor. Again, this is the most likely outcome based on current projection theory. Bruce may very well finish to the right of the apex with an outcome better than what I sent to ESPN. But I didn’t put him so low on a whim. I incorporated what I believe is the most current data germane to the analysis. And I stand by that result.
It’s funny: the exact same analysis says Chris Davis will not revert to his historical strikeout rate of over 30 percent, having improved to a still subpar, but more acceptable, 25 percent. This yields a rest-of-season batting average much higher than originally expected, yet no one is chiding me for jumping Davis way up in the rankings.
What's good for the goose is good for the gander. I’m perfectly fine if I end up with goose egg all over my face come September when Jay Bruce is hitting .260 with his usual 30 HR.