The Winston’s Lab OWL prediction game has, in my estimation, been a fantastic competition (an unbiased assessment, of course, from a key originator of the concept). It has produced an impartial record of analyst skill, from the well known to the unknown. After 180 regular season matches, there is sufficient evidence to single out a handful of competitors whose opinions are worth consideration.
In selecting analysts to highlight, I applied two filters. First, the analyst must have a top-tier APO (a measure of how likely the actual outcomes were according to their predictions). Second, their predictions must not show signs of gamesmanship. The most common evidence of gamesmanship is a significant number of 99%-1% predictions, which makes an analyst’s score more volatile; it may take a significant sample size before incorrect picks definitively remove any advantage from this strategy. In the meantime, an analyst using it has successfully avoided land mines, but has not demonstrated that their process will stand the test of time.
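To make the APO metric concrete, here is a minimal sketch of one way to compute it. Winston’s Lab’s exact scoring formula is not specified here, so this assumes a simple arithmetic mean of the probabilities an analyst assigned to the outcomes that actually happened; the function name and input layout are illustrative.

```python
def apo(predictions, outcomes):
    """Average Probability of Outcome (hypothetical sketch).

    predictions: list of P(team A wins) per match, each in [0, 1]
    outcomes:    list of 1 if team A actually won, else 0

    Returns the mean probability the analyst assigned to the
    result that actually occurred.
    """
    probs = [p if won else 1.0 - p for p, won in zip(predictions, outcomes)]
    return sum(probs) / len(probs)

# A cautious 60% call that lands scores 0.6; a confident 99% pick
# that misses scores only 0.01, dragging the average down hard.
apo([0.6, 0.99], [1, 0])  # roughly 0.305
```

This also shows why 99%-1% gamesmanship makes scores volatile: each extreme pick contributes either nearly 1.0 or nearly 0.0 to the average.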
With these filters, I have selected four accounts to form a basis for analyzing how predictable OWL has been this season: “edenfoley” (myself), “Yiska” (a known Overwatch analyst and my nemesis), “Jarjr315”, and “diftol”. Because each of us missed some predictions, I will limit the comparison to the 155 of the 180 regular season matches for which all four of us gave a prediction.
All four performances are strong over these 155 games in common. Averaging the sentiment of these four analysts (myself included) can shed light on the predictability of the league over time, as well as identify which teams’ matches have been more predictable.
The weeks have not been uniformly kind to analysts so far in Overwatch League. This is to be expected: if matches were certain, why would we assign probabilities, and why would the teams sit down to play? By taking every match and averaging the predictions of the four analysts above who made a prediction (most commonly all four, and very rarely fewer than three), I have produced the APO (the average probability assigned to the actual game outcomes) by week of the regular season so far. This includes all 180 regular season games, not only the 155 in which all four analysts participated.
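The weekly grouping described above can be sketched as follows. This assumes each match has already been reduced to a consensus probability (the average of the available analysts’ probabilities for the team that actually won); the tuple layout and function name are illustrative, not Winston’s Lab’s schema.

```python
from collections import defaultdict

def weekly_apo(matches):
    """Group matches by week and average the consensus probability
    assigned to each realized outcome (hypothetical sketch).

    matches: iterable of (week, consensus_prob_of_winner) tuples.
    """
    by_week = defaultdict(list)
    for week, prob in matches:
        by_week[week].append(prob)
    return {week: sum(ps) / len(ps) for week, ps in sorted(by_week.items())}

# Week 1 had one expected result and one upset; week 2 a coin-flip-ish win.
sample = [(1, 0.80), (1, 0.30), (2, 0.65)]
weekly_apo(sample)  # week 1 averages to 0.55, week 2 to 0.65
```

A low weekly value marks a week of upsets; a high one marks a week where the favorites delivered.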
Like a seismograph detecting the magnitude of earthquakes, this view detects the magnitude of surprise through the end of OWL Stage 3. An interesting tendency begins to emerge: the greatest surprises arrive midway through a stage. While far from a general tendency we can count on, it does generate some interesting hypotheses. Perhaps the meta that will dominate the playoffs becomes clear midway through a stage, and teams begin to adapt to a single script. This might cause a momentary re-calibration of team strength as some teams improve by identifying the way forward faster than others. There is also the real possibility that it is pure chance!
It is also interesting to consider how different teams have contributed to the predictability of league outcomes. Below, I take the averaged prediction for each game and group the resulting APOs by whether a team participated in the game, giving an APO per team.
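The per-team grouping has one twist worth making explicit: each match is credited to both participating teams, so every game contributes to two averages. A minimal sketch, with illustrative team abbreviations and field names (not the actual dataset):

```python
from collections import defaultdict

def apo_by_team(matches):
    """Per-team APO (hypothetical sketch): each match's consensus
    probability-of-outcome is credited to BOTH participants,
    then averaged within each team's set of games.

    matches: iterable of (team_a, team_b, consensus_prob_of_winner).
    """
    by_team = defaultdict(list)
    for team_a, team_b, prob in matches:
        by_team[team_a].append(prob)
        by_team[team_b].append(prob)
    return {team: sum(ps) / len(ps) for team, ps in by_team.items()}

# Two matches: a near-certain result and a closer one.
apo_by_team([("NYXL", "SHD", 0.9), ("NYXL", "DAL", 0.7)])
```

A team near the top or bottom of the standings tends to appear in lopsided, high-probability matches, pulling its average up, while a mid-table team's games cluster near 0.5.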
From this, we can see that teams closer to the top or bottom of the league across stages have made the league more predictable, while teams near the middle of the pack have generally been a source of greater uncertainty.
I think probabilistic predictions from analysts can tell us more than just the likelihood of future outcomes and the quality of the judgement of the analysts involved; they can also tell us about the history of the game itself. If there are any other views of the season you would like to see built from these four analysts’ predictions, I’d love to produce them, and we can see together what insights they reveal.