“Pity the sophomore. You are feted as a freshman, but no one seems to care that you’re back on campus … With the newness of college gone, malaise sets in.” Thus begins a 2013 New York Times article entitled “The Sophomore Slump.” But the struggle for second-year college students to get motivated when they feel like “overlooked middle children” has become only one of several uses of the phrase. There is even a Wikipedia entry on the topic, which defines the slump more broadly as “an instance in which a second…effort fails to live up to the relatively high standards of the first.” This latter definition is an apt description for the downturn in performance among second-year athletes after a promising rookie season.
In baseball, the sophomore slump phenomenon is widely accepted as common knowledge. For example, 2002 American League (AL) Rookie of the Year (ROTY) Eric Hinske himself expressed hope that he could avoid the sophomore slump right after accepting his award (unfortunately, he could not). That being said, there has been no shortage of statistical analyses investigating whether the slump holds true, and many of them focus primarily on ROTY award recipients. University of Georgia Baseball Analytics Intern Chris Gauthier found that, of the 12 position-player ROTYs from 2010 through 2018, only three improved upon either their rookie OPS+ (a popular all-encompassing offensive metric) or their WAR (wins above replacement player, which takes offense and defense into account) per 100 plate appearances (PAs) in their sophomore season. The only player to improve on both was Kris Bryant, the 2015 National League (NL) ROTY. During the same time frame, six pitchers were ROTYs, and only two improved upon both their rookie ERA+ (a pitcher's earned runs allowed per nine innings relative to that year's league average, scaled so that higher is better) and their WAR per 100 batters faced: Craig Kimbrel (2011, NL) and Jacob deGrom (2014, NL). Further, the average ROTY during this period posted worse numbers in batting average, on-base percentage, and slugging percentage (for hitters) and in ERA, strikeout percentage, and walk percentage (for pitchers). However, the average performance declines don't appear statistically significant, and I would hesitate to draw conclusions from minuscule sample sizes of 12 hitters and six pitchers.
Another analysis that focused on ROTYs was done by Taylor & Cuave (1994). They found that home runs declined significantly in ROTYs' sophomore seasons, even more so than typical regression to the mean (RTM) would suggest, while batting average declined in a way consistent with ordinary RTM. Baseball statisticians commonly use RTM to refer to the statistical phenomenon that, over time, samples of data cluster around the true average (or mean): when a particular sample is especially striking (i.e., a very good streak or a very bad slump), an observer should expect the next one to fall closer to the average.
Wetcher-Hendricks (2014) suggests that the reason RTM might seem to impact ROTYs more is not because there is some other effect in play, but rather because the population of ROTY recipients is self-selecting for those most ripe for regression. Since the ROTY award is based on just one season’s worth of data and goes to players who performed best in that sample, one could reasonably expect sophomore performance to decline towards the true average of each recipient. This decline would likely be larger than a typical decline after a good season because the award only goes to those who had outstandingly good seasons. For this reason, Wetcher-Hendricks describes the sophomore slump as more of a “Freshman Fluke … a statistical, rather than an athletic phenomenon.”
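Wetcher-Hendricks's selection argument is easy to demonstrate with a toy Monte Carlo simulation. The sketch below uses entirely made-up numbers (not modeled on real MLB data): every player gets a fixed "true talent" wRC+ plus random season-to-season noise, and the top 1 percent of observed rookie seasons are picked as "award winners."

```python
import random
import statistics

random.seed(42)

# Hypothetical model: each player has a fixed "true talent" wRC+,
# and each observed season is true talent plus random noise.
N = 10_000
true_talent = [random.gauss(100, 10) for _ in range(N)]
rookie = [t + random.gauss(0, 15) for t in true_talent]
sophomore = [t + random.gauss(0, 15) for t in true_talent]

# "Award winners": roughly the top 1% of observed rookie seasons.
cutoff = sorted(rookie, reverse=True)[N // 100]
winners = [i for i in range(N) if rookie[i] > cutoff]

rookie_avg = statistics.mean(rookie[i] for i in winners)
soph_avg = statistics.mean(sophomore[i] for i in winners)

# The winners' sophomore average falls back toward their true-talent
# mean even though no player's underlying ability changed.
print(f"rookie avg: {rookie_avg:.1f}, sophomore avg: {soph_avg:.1f}")
```

Even with nothing "athletic" happening, the selected group declines substantially in year two, because selecting on one noisy season guarantees the winners were, on average, lucky. That is the "Freshman Fluke" in miniature.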
But what of the explanation Evan Altman, editor-in-chief of Cubs Insider, provides for the slump? The one most commonly heard around baseball clubhouses? For coaches and players alike, baseball is a game of adjustments. If a hitter torches the high fastball, start throwing him low ones. If he starts hitting the low ones too, start throwing him more curves. But if he can't adjust to the low fastball, there is no reason to stop throwing it, and his performance will decline accordingly. When rookies arrive at the major league level, common knowledge holds that opponents are not yet familiar with the newbies' strengths and weaknesses. So, it might take around a year for the rest of the league to figure out how best to oppose a hot-starting rookie.
Could this adjustment explanation account for some of the sophomore slump along with statistical RTM? I performed my own analysis, this time focusing not only on ROTYs but on all rookies. While ROTYs might be subject to more RTM than other rookies, other good rookies will also face league adjustments as their tendencies become clearer. Including all rookies also provided me with a larger sample size. Lastly, I theorized that my most important results would come from separating good rookies from poor ones, because poor rookies likely aren't subjected to much change in strategy from their opponents.
I defined rookie hitters as those who exceeded their rookie eligibility threshold (130 career at bats) that season. I began by comparing the performance of 2018 rookie hitters with their sophomore stats. The main metrics I used were WAR per 100 PAs and wRC+, the latter of which is a (superior, in my opinion) alternative to OPS+. I performed the analyses using paired-samples t-tests.
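For readers who want to replicate this kind of test, here is a minimal sketch of a paired-samples t-test in Python, using the standard library and made-up wRC+ values (not my actual 2018-19 sample):

```python
import math
import statistics

# Hypothetical wRC+ values for the same ten hitters in their rookie
# and sophomore seasons (invented numbers, for illustration only).
rookie = [121, 135, 98, 110, 142, 105, 88, 130, 117, 124]
soph   = [102, 118, 95, 101, 120, 99, 92, 108, 104, 110]

# A paired-samples t-test operates on the per-player differences:
# t = mean(d) / (stdev(d) / sqrt(n)).
diffs = [r - s for r, s in zip(rookie, soph)]
n = len(diffs)
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Two-tailed critical value for df = n - 1 = 9 at alpha = 0.05.
T_CRIT = 2.262
print(f"t = {t:.2f}, significant at 0.05: {abs(t) > T_CRIT}")
```

Pairing matters here: because each player serves as his own baseline, the test asks whether the average year-over-year change differs from zero, rather than comparing two unrelated groups.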
The mean wRC+ for the 62 rookies in 2018 was 94.6 (average is 100), while the mean for their sophomore seasons was 90.1, an insignificant (p = 0.42) decline. However, splitting the rookies into those with above average (n = 29) and average or below (n = 33) wRC+s in their rookie years yielded interesting results. The above average group saw an almost 17 percent decrease in mean wRC+ in their sophomore seasons, falling from 121.1 to a barely-above-average 100.7 (p < 0.005). On the other hand, the average or below group saw a more than 13 percent increase in mean wRC+, going from 71.4 to 80.9. However, this result was not significant (p = 0.22).
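The above/below-average split is straightforward to reproduce. The sketch below uses a handful of invented (rookie wRC+, sophomore wRC+) pairs rather than my real 62-player sample:

```python
# Invented (rookie wRC+, sophomore wRC+) pairs, for illustration only.
pairs = [(121, 101), (135, 110), (98, 108), (110, 102),
         (142, 115), (71, 83), (88, 95), (130, 104)]

# Split on rookie-year wRC+ (100 = league average).
above = [(r, s) for r, s in pairs if r > 100]   # above-average rookies
below = [(r, s) for r, s in pairs if r <= 100]  # average or below

def mean_pct_change(group):
    # Average percent change from rookie to sophomore season,
    # using the rookie (baseline) value as the denominator.
    return sum((s - r) / r for r, s in group) / len(group) * 100

print(f"above-average rookies: {mean_pct_change(above):+.1f}%")
print(f"average-or-below:      {mean_pct_change(below):+.1f}%")
```

One design note baked into the comment: percent change is computed against the rookie-year baseline, which is the convention I follow when reporting the declines and gains below.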
For WAR per 100 PAs, the mean for all 62 rookies was 0.255 in 2018 and dropped to 0.184 in 2019, an insignificant difference (p = 0.34). The above average group (n = 28) saw a massive, nearly 60 percent decrease, going from 0.63 to 0.26. For reference, average starters typically accrue 500 PAs and 2 WAR per season. On the flip side, the average-or-below group performed slightly worse than replacement level in their rookie years (-0.06) and improved to 0.12 in their sophomore seasons. Still, the difference was far more significant in the above average group (p < 5^-5 vs. p = 0.089).
Basically, if your eyes glazed over during that numbers dump: I found that rookie position players on average saw small drops in their performance level. This was due almost entirely to hitters who were above average in their rookie year and fell off a cliff in their sophomore season; hitters who were below average as newcomers improved the next year, but not nearly enough to offset the flagging performance of their (at least formerly) superior counterparts. Now prepare for another numbers dump.
Upon further investigation, I found that this effect replicated for 2018 rookie pitchers as well. I defined rookie pitchers as those who exceeded their rookie eligibility threshold (50 innings pitched) that season. I used ERA- (similar to ERA+, except lower is better; average is 100) and WAR per 100 batters faced (BF). The average ERA- for rookie pitchers was 103.4, which rose to 111.3 the next season, an insignificant difference per a paired-samples t-test (p = 0.10). Pitchers whose ERA- was below 100 their rookie season (n = 36; these were the above average performers) saw their average rise by more than 25 points, a significant difference (p < 5^-5), from 79.4 to 104.9. Those who began their careers with an ERA- of 100 or above (n = 36) saw their average decrease by only around 8 percent, an insignificant difference (p = 0.15), from 127.4 to 117.7.
For WAR per 100 BF, the average for the entire sample was about 0.20, which decreased to 0.12 in 2019, though only marginally (p = 0.08). This effect was largely driven by those who were above average in this metric (n = 36): their WAR per 100 BF decreased by more than 46 percent, from an average of 0.39 to 0.21 year-to-year (p < 0.001). Those who came in at or below average saw a large relative improvement in performance, going from 0.001 to 0.04, but this insignificant (p = 0.36) 0.039 gain pales in comparison to the 0.18 loss for the former group.
Again, the key here is that rookie pitchers on average performed worse in their second year. Largely, this was because pitchers who were above average as rookies struggled as sophomores. Pitchers who struggled as neophytes made gains the next season, but those gains didn’t outweigh the difficulties that the above average rookies faced.
Teams place more emphasis on assessing the strengths and weaknesses of their troublesome rookie opponents, especially during the offseason. That is why the shift in their performance going into the second season is more precipitous than the shift for harmless foes, even though inept rookies often began further from the mean than their well-to-do counterparts. Pure RTM would likely have shown the largest change for whoever started furthest from the league and sample averages. Yet the adjustment effect and RTM can also be thought of as one and the same: the inevitability of opponents making adjustments is baked into RTM in baseball. But the significant difference in the shifts suggests that, at the very least, RTM is sped up among the better rookies through increased emphasis on their styles of play.