Prior to the season Simon Gleave and Constantinos Chappas collated a whole set of predictions as to how the Premiership would look come the end of the season. Basically there are two groups of predictions – ‘statisticians’ and ‘experts’. The ‘statisticians’ are those of us with predictive models that forecast the points scored by each team over the course of the season, whereas the ‘experts’ are media members who each posted the order in which they expected the Premiership to finish. Those positions were then translated to an inferred expected points total based on the average number of points scored by a team finishing 1st/2nd/3rd etc. over the past 18 Premiership seasons.
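That position-to-points translation is simple enough to sketch in code. Everything below is illustrative: the team names and points totals are made up (the real version would use the actual final tables from the past 18 seasons), but the mechanics – average the points scored at each finishing position across past seasons, then map a predicted finishing order onto those averages – are as described above.

```python
def average_points_by_position(historical_tables):
    """Average points scored by the team finishing in each position,
    given past seasons' final points totals sorted 1st place downwards."""
    n_positions = len(historical_tables[0])
    n_seasons = len(historical_tables)
    return [
        sum(season[pos] for season in historical_tables) / n_seasons
        for pos in range(n_positions)
    ]

def infer_expected_points(predicted_order, position_averages):
    """Map a predicted finishing order to inferred expected points totals."""
    return {team: position_averages[pos]
            for pos, team in enumerate(predicted_order)}

# Toy example: three made-up seasons of a four-team league.
history = [
    [87, 75, 70, 61],
    [90, 77, 69, 64],
    [84, 79, 72, 60],
]
avgs = average_points_by_position(history)  # [87.0, 77.0, 70.33..., 61.66...]
inferred = infer_expected_points(["A", "B", "C", "D"], avgs)
```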
Since then I’ve been posting on twitter every week with an update as to how well these prediction models were performing against the number of points each Premiership team is on pace to score over the season. The latest version of that is re-posted below:
(A note on this – it looks slightly different to the requisite plot I posted on twitter on Monday – I’d previously awarded Southampton 3 points instead of Everton from their game this weekend. Both plots in this post have been rendered using the correct numbers.)
Basically at this point the ‘statisticians’ are trouncing the ‘experts’ – I try my best not to overstate things on here, but in this case it isn’t even particularly close.
However – as Daniel Altman pointed out to me on twitter a while back, maybe this isn’t a particularly fair assessment of the ‘experts’, given that they were predicting which positions teams would finish in, whereas the ‘statisticians’ were predicting the number of points each team will score. If we assess all of the predictions based on the number of points teams have scored, then maybe the measure is biased in favour of the ‘statisticians’. So, given it’s the midpoint of the season, I thought I’d go back and do this analysis the other way: translate the ‘statisticians’ points projections to table positions, and bias the analysis in favour of the ‘experts’. The result if I do that is shown in the plot below:
Well, as expected, the ‘experts’ receive a comparative boost when judged by this method – as a group they now perform markedly better than if they’d just assumed each team would perform exactly the same this season as last. However, despite this method not being particularly suited to the ‘statisticians’ models, they’re still beating the ‘experts’ by a wide margin. To give a quick visual summary of that, here are all of the projections, ranked by how well they’re performing as measured both by points and by league position. The ‘experts’ are in black text, with the ‘statisticians’ in red.
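The reverse translation used for this comparison is just a ranking: sort each model's projected points from highest to lowest and read off the predicted table positions. A minimal sketch, with hypothetical team names and projections (and no attempt at the tie-breaking rules a real table would need):

```python
def points_to_positions(projected_points):
    """Rank teams by projected points, highest first.
    Returns a dict mapping each team to its predicted finishing position."""
    ranked = sorted(projected_points, key=projected_points.get, reverse=True)
    return {team: pos for pos, team in enumerate(ranked, start=1)}

projections = {"A": 78.4, "B": 81.2, "C": 65.0, "D": 70.1}
points_to_positions(projections)  # {"B": 1, "A": 2, "D": 3, "C": 4}
```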