Slumping Sophomores Part II

In last week’s post (below), I talked about the “Sophomore Slump” and why top performers rarely keep performing at the same level.  I also mentioned that, for similar reasons, the worst performers generally improve.  To demonstrate this statistical phenomenon, I highlighted the bottom five NBA players (by year-to-date average Fantasy Points per Minute – FPPM) on my fantasy basketball site and predicted that they would improve this week.  Well, let’s check in on them and see how the experiment turned out!

Mike Miller (CLE) – 0.23 to 0.34.

Nik Stauskas (SAC) – 0.29 to 0.50.  This guy took off, averaging a full fantasy point per minute over back-to-back games.  In one of them he had 9 points, 3 rebounds, and 2 blocks.

Will Barton (POR) – 0.35 to 0.54.  He crushed it in his last game with a 1.67 FPPM.

Alan Anderson (BKL) – 0.36 to 0.51.  12 points in his last game.

Jason Maxiell (CHA) – 0.37 to 0.50 for the year.  Had an 8-point, 6-rebound game.

The “loser’s lift” prediction goes five for five!  Every one of these guys dramatically increased his season average fantasy points per minute immediately after I called him out as being in the bottom five.  Maybe they read my blog and played harder.  Or more likely, this is just another example of the common statistical phenomenon called regression to the mean.  Since this tendency for the worst to improve is fairly obscure, people often mistake it for evidence that something they did caused the improvement.
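You can watch the same thing happen in a toy simulation.  Everything below is made up (hypothetical skill and luck numbers, not real NBA data), but the structure matches the experiment: each player’s observed average is his true ability plus some luck, and the players at the very bottom are mostly the unlucky ones.

```python
import random

random.seed(0)

# Toy model of regression to the mean (all numbers made up, not real
# NBA data): a player's season-to-date FPPM is a fixed "true skill"
# plus luck.  The bottom 5 by observed average are disproportionately
# the unlucky ones, so a fresh week of games tends to look better.
def bottom_five_improved(n_players=300, skill_sd=0.15, luck_sd=0.2):
    skills = [random.gauss(0.9, skill_sd) for _ in range(n_players)]
    observed = [(s, s + random.gauss(0, luck_sd)) for s in skills]
    bottom5 = sorted(observed, key=lambda p: p[1])[:5]
    # Next week: same true skill, a brand-new draw of luck.
    return sum(s + random.gauss(0, luck_sd) > obs for s, obs in bottom5)

trials = 200
improved = sum(bottom_five_improved() for _ in range(trials))
print(f"{improved} of {trials * 5} bottom-five player-weeks improved")
```

No pep talks to basketball cards required: the vast majority of bottom-five players “improve” the following week simply because their luck stops being terrible.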

This exact situation happened a few years ago at work.  A friend was tasked with optimizing under-performing domain names for our customers.  He was pretty savvy with stats and suspected that he wasn’t doing anything useful, but every time he touched those domains, they jumped up in revenue!  One day, he forgot to make any changes, and the revenue for the names jumped up just like it always did.  The customer said “well, whatever you did worked!”  At that point, it really hit home that he could be unintentionally hurting revenue (without a random control group, how would you know?) and he stopped doing it.

I also once played a practical joke on the guys at work by identifying domain names that had very low revenue for a long time and then claiming that I was “activating” them by clicking images on each of the web pages.  When they saw the revenue increase by 400%, people were scrambling to figure out how they could scale it up and hire temps to do the clicking.  Thankfully one of them eventually said “I think Jay’s messing with us” and kept people from wasting too much time (I probably shouldn’t have punked them on a day when I was out of the office, but I thought the story was ridiculous enough that they wouldn’t fall for it). Hopefully, the joke left a lasting impression and taught everyone to be more skeptical and to request a control when faced with claims of incredible revenue increases.

Once you’re familiar with the idea that the best things tend to decline and the worst things tend to improve, you will see it everywhere.  One place I thought it would show up was in the odds for UFC fights.  A few years ago, I started an experiment at mmaplayground.com: betting (fake money!) on the biggest underdog in each UFC event.  So far, after 160 events, my play-money winnings for those bets stand at +$11,417.

The reason this works, I think, is that the site (since it isn’t trying to make a profit on the bets) posts what it believes are the true odds for each fight.  (Real-money sites appear to underpay for big underdogs, so please do not take this as an endorsement to gamble away all of your money!)  Since the site was more likely to have underestimated the biggest underdogs and overestimated the abilities of the biggest favorites, the odds it posted for those fighters were favorable for me.  The average money line posted for the big underdogs was +613, which implies a winning percentage of only 14%.  Their actual winning percentage was 30 / 160 = 18.8%.  This doesn’t necessarily mean that the site is being generous with its underdog odds; it may have estimated them perfectly based on past performance.  It’s just that the worst fighters are in the same situation as our five NBA players above: probably not living up to their true abilities.
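For readers who don’t follow sports betting, here’s the arithmetic behind that 14% figure.  The conversion formula for positive American money-line odds is standard; the 613, 30, and 160 are just the numbers from the paragraph above.

```python
# Converting a positive American money line (the underdog side) into an
# implied win probability: a 100 stake wins `money_line`, so the
# break-even probability is 100 / (money_line + 100).
def implied_prob(money_line):
    return 100 / (money_line + 100)

print(f"+613 implies a win probability of {implied_prob(613):.1%}")  # 14.0%
print(f"actual underdog record: {30 / 160:.1%}")                     # 18.8%
```

The gap between the 14% implied rate and the 18.8% actual rate is where the play-money profit comes from.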

Slumping Sophomores (Regression to the Mean)

What is the “sophomore slump” and the “Sports Illustrated curse” and are they real or just superstitions?

The sophomore slump is the tendency for the top rookies in a sport to perform worse in their second year.  Similarly, the Sports Illustrated curse is when an excellent player is featured on the cover of SI and then suffers a decline in accomplishments soon afterwards.  It turns out that these phenomena are very real and have nothing to do with players being psychologically affected by public recognition.  You might think that players could avoid the curse if they never learned that they were on the cover of Sports Illustrated, but it turns out that they’re pretty doomed anyway.  So what causes this?

We data wonks are very familiar with this phenomenon of Regression to the Mean and see it everywhere.  We see it when sequels to great movies don’t live up to the originals.  We think of it when people try to tell us that punishment for bad behavior works better than reward for good behavior.  We nod with understanding when we are told to rebalance our investment portfolios.  We cringe when people tell us how they made a tweak to their under-performing websites and the improvement was immediate and dramatic.

What it basically boils down to is this: those who performed badly were more likely to have had a bad day (or week or year), and those who performed well were more likely to have gotten somewhat lucky.  It seems obvious when stated like that; after all, how often were the worst performers lucky to do as well as they did?  However, the results that follow from this truth can be subtle and surprising.

If you tell a scientist that you felt like you were on your deathbed, took a pill, and then woke up the next day feeling better, they will not accept that as evidence that the pill worked.  They’ll say “you need a control” and a bunch of other wonky stuff that seems beside the point, because how could the result be any clearer?  Well, their objection isn’t only that, unless you’re actually dying, you’ll generally improve on your own; it’s also that having your worst day ever is an unusual event and hard to repeat.  Like Seinfeld, you probably have the strange sensation that all your bad days and good days even out.  What is really happening is that your fantastic days are almost always better than the days that follow them, and your horrible days are almost always worse.
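That last claim is easy to check with a toy simulation (made-up data, not anyone’s actual good and bad days): if daily performance is just independent noise around a stable personal baseline, then a day in your top 10% is almost always followed by a worse one.

```python
import random

random.seed(1)

# Toy check of the "great days are followed by worse days" claim, using
# made-up data: each day's quality is independent noise around a fixed
# baseline.  No curse involved -- a top-10% day is an outlier, and the
# next day is a fresh draw from the ordinary distribution.
days = [random.gauss(0, 1) for _ in range(20_000)]
cutoff = sorted(days)[int(len(days) * 0.9)]  # 90th-percentile day

great_days = worse_after = 0
for today, tomorrow in zip(days, days[1:]):
    if today >= cutoff:
        great_days += 1
        if tomorrow < today:
            worse_after += 1

print(f"after a top-10% day, the next day was worse "
      f"{worse_after / great_days:.0%} of the time")
```

The same run shows the mirror image for bottom-10% days, which is exactly the “loser’s lift” the experiment below is about.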

What is less commonly known than the sophomore slump is the opposite situation: let’s call it the “loser’s lift.”  To demonstrate it, let’s do an experiment.  I just looked at the stats in my fantasy basketball league (shout-out to basketball.sports.ws) in an unusual way… from the bottom up.  Ranked by fantasy points per minute, as of 11/21, here are the WORST five players in the NBA right now (among those who’ve played at least 10 games so far)…

Mike Miller (CLE) – 0.23

Nik Stauskas (SAC) – 0.29

Will Barton (POR) – 0.35

Alan Anderson (BKL) – 0.36

Jason Maxiell (CHA) – 0.37

I predict that over the next week, most or all of these players will improve (assuming that I use my data wonk magic and give their basketball cards a pep talk, of course).  I’ll follow up in a week and let you know what happened.

In the meantime, rebalance those portfolios, lower your expectations for sequels, go ahead and reward your kids for good test scores, and for God’s sake, use a control (random untreated group) if you’re trying to determine if something works!  Also, I’m sorry to tell you that if you really enjoyed this post, the next one probably won’t be as interesting to you.