Basically the Team Ratings suggest that LFC were an excellent team in the middle four seasons under Benitez and, despite the fall-off in LFC's performance in the final season, Benitez left LFC in decent shape. The decline continued in the season under Hodgson/Dalglish, before their Team Rating picked up in Dalglish’s second season.

I’ve also written about him stopping the bleeding at Chelsea following the train-wreck that was RdM’s tenure. Here are the plots of Team Rating for the two seasons either side of Benitez’s tenure at Chelsea:

Chelsea were in the middle of a pretty serious decline when Benitez took over, and their performance under RdM was astonishingly poor. That being said, Mourinho immediately took CFC back to the Premiership's upper echelons in his first season.

As such, I’m fairly solidly in the camp that thinks that in the Premiership Benitez has been a perfectly fine manager at Liverpool and fairly good at Chelsea – had he moved to West Ham I’d have been damn happy. His track record in England is, however, only half of the story – he’s also spent time at Inter and Napoli in Italy.

However I didn’t have an equivalent Team Rating for Serie A, so I built one. I grabbed the data from the last eight seasons in Serie A from football-data.co.uk and built a simple Team Rating based on Goals, Shots on Target, and Total Shots*. The Team Rating is given by the equation:

Team Rating = (13.6 x Goals For + 5.40 x Shots on Target For + 0.24 x Total Shots For) – (10.50 x Goals Against + 0.14 x Shots on Target Against + 0.32 x Total Shots Against)

The R^2 between Team Rating this season and Team Rating next season is 0.603, and this season's Team Rating predicts next season's points with a Standard Deviation of 10.40 points and a Mean Absolute Error of 8.21 points (all of n = 119)**. As with the Premiership Team Rating, I’ve then re-scaled the values linearly on to a 0-10 scale, with the minimum and maximum Team Ratings in the sample being assigned 0 and 10, respectively.
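For concreteness, the rating and the re-scaling steps can be sketched in a few lines of Python (the function names are mine, not from the original spreadsheets):

```python
def team_rating(gf, sotf, tsf, ga, sota, tsa):
    """Raw Serie A Team Rating from season totals of goals, shots on
    target, and total shots, using the coefficients quoted above."""
    return (13.6 * gf + 5.40 * sotf + 0.24 * tsf) \
         - (10.50 * ga + 0.14 * sota + 0.32 * tsa)

def rescale_0_to_10(ratings):
    """Linearly re-scale the raw ratings so the sample minimum maps to 0
    and the sample maximum maps to 10."""
    lo, hi = min(ratings), max(ratings)
    return [10 * (r - lo) / (hi - lo) for r in ratings]
```

Feeding in each team's season totals and re-scaling the resulting list gives the 0-10 numbers plotted below.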

Onto the results. Benitez took over from Mourinho who had won Serie A in the prior two seasons, despite maybe not having the underlying numbers to do so. In addition Mourinho had also won the Coppa Italia and the Champions League in his second season. Benitez lasted half of the season, and Inter bounced immediately back to their level under Mourinho when Benitez left.

Inter have fallen off a cliff in the last two seasons, but given the turnaround when he left I don’t think we can attribute any of that to Benitez not being there.

Finally we have Benitez’s current team – Napoli. Their Team Ratings for Benitez’s tenure, along with the two seasons prior are plotted below:

There’s not really a ton to say here. Napoli have had the 3rd highest Team Rating in Serie A in each of the past four seasons, finishing 5th, 2nd, 3rd, and 5th over that span. Whilst Napoli haven’t been able to break into the top 2, it’s also true that no team since at least ’05-06 has scored as many as Napoli did in ’13-14 (78) whilst finishing 3rd in Serie A. Also, Benitez inherited a pretty good team but has kept them playing to that level, which isn’t necessarily easy and deserves some credit in and of itself.

In all, I still think Benitez is a perfectly good manager: one who turned Liverpool into a really good team; who stopped the bleeding at Chelsea, whilst maybe not improving them as much as could have been done; who had a really poor spell at Inter; and who, whilst he hasn’t been able to push Napoli into the very top tier in Serie A, has at least kept them running in place, which shouldn’t be written off. On the other hand, I don’t think this piece provides evidence to suggest that he’s one of the two or three best managers in the world, which is what Madrid should be looking for.

*I would have included more seasons but it looks like a different data provider was used before 2007-08.

**As a comparison the equivalent numbers for the Premiership Team Rating are a little better, with R^2 = 0.796, STDEV = 9.56, MAE = 7.67 (all n = 221).


First up, the counting numbers: goals, shots on target, and shots.

So in both cases the teams scored more goals and took more shots under Pardew. Both teams conceded fewer total shots without Pardew; however, under Pardew they conceded fewer shots on target and fewer goals.

Overall, was Pardew a benefit in terms of these numbers? Let's take a look at the change in their differentials and ratios:

So in terms of differential and ratio the teams come out ahead of the game under Pardew in five of the six categories. Newcastle actually took a greater proportion of the shots in their games, but were beaten badly in terms of the proportion of shots that were on target. At Palace all of the numbers spiked in the right direction when Pardew took charge. If we break these numbers down a bit further a couple of things stick out:
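For reference, the two forms used in the tables are simply these (function names mine; TSR, for instance, is the ratio form applied to total shots):

```python
def differential(for_, against):
    """The differential form, e.g. goal difference: goals for minus goals against."""
    return for_ - against

def ratio(for_, against):
    """The ratio form, e.g. TSR: the proportion of the shots taken in a
    team's games that the team itself takes."""
    return for_ / (for_ + against)
```

A perfectly average team comes out at 0 on the differential form and 0.5 on the ratio form.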

Once Pardew left Newcastle something incredible happened – the 44% of opposition shots that were on target is a truly remarkable number. Newcastle actually saved a slightly larger proportion of the shots that were on target, but that number (63%) was still woefully low.

At Palace the swing in percentages came elsewhere, with Palace scoring an above league average proportion of their shots on target, and their save percentage regressing towards the league average.

Finally lets look in terms of points and Team Rating:

The teams under Pardew scored 59 points per 38 games, double the 29 points per 38 games they scored without Pardew in charge. This seems like a vast difference – whilst the former team is in the mix for one of the final Europa League places, the latter is in the mix for the bottom of the table. Was this difference in points warranted by the performance of the teams? Well, Team Rating suggests that the performance under Pardew would warrant 52 points per 38 games, and the teams without Pardew would warrant 42 points per 38 games. To be clear, whilst the 10 point difference here isn’t the 29 point difference that we observed in reality, it is still a lot of points – and if it were repeatable it’s probably worth £25-50 million per year, even under the current TV deal.
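The per-38-games normalisation used throughout this comparison is straightforward; a one-line sketch (the example totals in the comment are hypothetical, purely to show the scaling):

```python
def points_per_38(points, games):
    """Scale a points haul from an arbitrary number of games on to a
    full 38-game season, so spells of different lengths can be compared."""
    return 38 * points / games

# e.g. 30 points from 20 games scales to 57 points per 38 games
```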

Edit 25May15, 17:00 BST: Originally the summary paragraph to this piece was “In summary, whilst Pardew’s season wasn’t quite as incredible as the points scored by the two teams with/without him would suggest, he still had a great positive impact upon each of the teams.”, but in retrospect I don’t think that’s an accurate reflection. It’s true that both teams scored more points with him at the helm, however at Newcastle the vast majority of that was due to something (% of opponent shots that go on target) that was unsustainably bad after he left. As such, the Team Ratings are fairly similar with and without Pardew. In comparison, the improvements seen at Palace were a mix of things that were sustainable (TSR), things that likely would have regressed anyway (sv%), and things that were unsustainable and we’d expect to regress towards the mean in the future (sh%). And it’s really that sustainable improvement in TSR at Palace that is the main part of the positive impact that he had.


“…then you get a grey area where some people multiply the derived figure by 1000, some by 100 or others, well, me at least, leave it as a decimal.
… Why isn’t there a standardised numerical format?”

I’ll explain quickly why I go with multiplying by 1000. It’s quicker to say and quicker for at least my mind to process – I suspect I’m not the only one. For example, let’s take a PDO of 1028.

For number x 1000: “Ten twenty eight”

For number x 100: “One oh two point eight”

For raw number: “One point oh two eight”

The argument holds for basically every value of PDO. You can still have all of the digits in whatever format you choose; the first one is just easier for my mind to process. And, as there’s never going to be a team on the x 1000 scale who could be confused for a team on the x 100 scale (i.e., no team registers a PDO of 300 on either the x 100 or the x 1000 scale), it’s relatively intuitive to figure out what the average should be (I’ll come back to this later).

“I propose this, and I propose it with goodwill but little expectation:
(goals for divided by shots on target for) minus (goals against divided by shots on target against) AKA: (shooting % For) minus (shooting% Against) This does two things:”

Ok, positive is good, negative is bad. That’s reasonable. It’s probably easier than “something with a 1 in front of it is above average, something with a 9/8 in front of it is below average”.

“We are combining the For and Against aspect of the same metric. Just as we subtract Goals Against from Goals For to create Goal Difference, we do the same to create PDO or “Shooting% difference”.”

I’m actually not sure what the point is here, as PDO also combines the for and against aspect of shooting %. Maybe the point is that both do this but PDO doesn’t have the positive good/negative bad aspect so this is a better method? I’m not sure.

“And it is entirely related why? The eagle-eyed will have noticed, we’ve essentially derived the same number as PDO, we’ve just decluttered it a bit. The PDO of 107 or 1070 is now defined as 0.07. A PDO of 982 is now -0.18. Average is zero.”

First, I don’t really understand how this chimes with this, from earlier in the piece:

“I am presuming there was a comfort found in adding your team’s shooting percentage to it’s save percentage; you have built a single figure for your team and you are defining what your team is doing but I feel there is more clarity in the entirely related but subtly different:
“What is my team doing and what is the opposition doing against us?””

If two things are entirely related, give a number that tells you the same thing, and you intuit the same information regardless of which way it is reported then I don’t see any subtle difference. In other words you could apply the last line of reasoning from this second quote to PDO and it wouldn’t change a thing. For all we know that is how some people think about PDO right now. Am I missing something?

Second, the first quote there sounds reasonable, but it leaves out a large number of people, in my experience the vast majority, who report shooting percentage as a percentage (say 30), rather than a decimal (0.30). So now we run into exactly the problem that came before, but here there’s actually potential for it to be much worse.

Team A has a sh% of 30%, and a sv% of 70.1%.

Depending on your preference their PDO would be calculated to be 1001, 100.1, or 1.001 – however you calculate it, that is relatively simple to intuit as slightly higher than average.

Using the same numbers but using the new method (your sh% minus your opposition's sh%) you’d get a value of 0.001 if you used decimals and 0.1 if you used percentages. The two scales overlap one another, and whilst 0.1 would be high on the decimal scale it’s essentially nothing on the percentage scale. So now, whilst we have a common centre for all values, unless we explicitly state each time which of the percentage/decimal systems is being used, the data is left open to misinterpretation (guess who’s had that happen before). I should add that James sidesteps this by calculating as a decimal, to 2 decimal places, and if that convention holds then that’s great.
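To make the ambiguity concrete, here's a quick sketch using the Team A numbers from above (function names are mine):

```python
def pdo(sh, sv, scale=1000):
    """PDO: shooting % plus save %, on a chosen scale (x1000, x100 or raw)."""
    return (sh + sv) * scale

def sh_diff(sh_for, sh_against):
    """The proposed alternative: your shooting % minus your opponents'."""
    return sh_for - sh_against

sh, sv = 0.30, 0.701       # Team A, as decimals
opp_sh = 1 - sv            # opposition shooting % is 1 minus your save %

pdo(sh, sv)                # about 1001 on the x1000 scale: clearly above average
sh_diff(sh, opp_sh)        # about 0.001 as a decimal, 0.1 in percentage points
```

On any of the three PDO scales the result is unmistakable; the sh% difference of 0.001 vs 0.1 is only interpretable once you know which convention is in play.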


“In recent seasons in particular, it seems that having a run in the FA Cup has had an adverse effect on clubs’ league form. Only two of the 10 FA Cup finalists in the last five years have averaged more points per Premier League game after having made it into the fifth round – when we can reasonably start to call it a cup run – than they did before that stage. Those two teams were Wigan Athletic in 2012-13 and Portsmouth in 2009-10. Both were relegated and Portsmouth would have gone down even without their nine-point deduction.”

The link to how the article is marketed on twitter by the Guardian can be found here.

Here are the issues I have with the maths used (or not) in the article:

**1. Use of a binary scale.** The article judges teams based on the question ‘did they score fewer points/game than before’, and essentially assigns a 1/0 value based on whether the answer is yes/no, regardless of how many fewer/more points per game the team scored. It finds an 8/2 split, which is interesting and could well be something, but it’s not statistically significantly different from a 5/5 split (p = 0.18).

What happens if we do this more scientifically, and compare the points per game scored in the first 22 games to the points per game scored in the final 16 games (I’d argue this is a much more sensible method)? The teams score an average of 1.63 points per game in the first 22 games of the season, and an average of 1.40 points per game in the final 16 games of the season, though the result is further still from being statistically significant (p = 0.36).

**2. Regression exists.** Whilst points scored in the first 22 games of the season are highly predictive of points scored in the final 16 games of the season (R^2 = 0.80), we’d still expect to see some regression towards the mean of 1.37 points per game. In other words our best guess is that any team scoring >1.37 points per game will do worse in the final 16 games of the season, whilst any team scoring <1.37 points per game will do better in the final 16 games of the season.

What do we see? Well exactly that for 8 of the 10 teams in the sample (note 1).

Let’s go a step further and use this expected regression to generate an expected points per game for each of the 10 teams in the final 16 games of their season, based on the points per game each one scored in the first 22 games of the season. We find that 8 of the 10 teams under-perform their expected points total, but when regression is taken into account the statistical significance of the observed drop in points drops further (p = 0.40).
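The regression step above can be sketched as follows. The function name is mine, and the use of r = sqrt(R^2) as the shrinkage factor is one common choice (it assumes a similar spread of points per game in both parts of the season), not necessarily the exact regression used here:

```python
import math

LEAGUE_MEAN_PPG = 1.37   # mean ppg over the final 16 games, from the text
R_SQUARED = 0.80         # first-22 vs last-16 ppg, from the text

def expected_last16_ppg(first22_ppg):
    """Shrink a team's first-22-game ppg towards the league mean by the
    correlation coefficient r to get an expected last-16-game ppg."""
    r = math.sqrt(R_SQUARED)
    return LEAGUE_MEAN_PPG + r * (first22_ppg - LEAGUE_MEAN_PPG)
```

Any team above 1.37 ppg gets pulled down a little, any team below it gets pulled up, which is exactly the expectation described in point 2.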

**3. Outliers.** Almost half of the difference in the aggregate points per game scored by the teams in their first 22 games and their points per game in the last 16 games is due to just two of the ten teams. Arsenal in '13-14 lost Walcott, Ramsey, and Wilshere for a significant portion of the last 16 games, whilst I can't find as clear an explanation for Liverpool. This isn't suggested as a possibility in the article (though apparently "it is no coincidence that both [Arsenal and Hull] fell away in the league as they went on their cup runs").

**4. Sample size.** In the paragraph I quoted near the beginning of the piece the author states:

"In recent seasons in particular, it seems that having a run in the FA Cup has had an adverse effect on clubs’ league form."

I think it's reasonable to read that and think that the author is suggesting that this wasn't the case in the past. And lo and behold, we only need to go back one year further than the study in the article to see that both Chelsea and Everton performed better in the final 16 games of the season than they did in the first 22. In fact, if the sample size is extended to 10 seasons (note 2) then the split of teams is 10 that performed better in the first 22 games of the season and 9 that performed better in the last 16 games of the season (the 20th team were Swansea, who were in the Championship at the time). Obviously that is not a statistically significant result, either in terms of the number of teams that do better or in terms of the difference in points per game (p = 0.61). Further, as a group those teams scored an average of 1.93 points per game in the first 22 games of the season, and 2.01 points per game over the final 16 games of the season.

Finally, the article includes the quote "Aside from clubs near the foot of the table who are fighting for their lives (and perhaps gain confidence from a Cup run), Premier League form tends to suffer from progress to the final." Amongst the teams that improved in terms of points per game in the final 16 games of the season during those extra five seasons were '08-09 CFC, '06-07 CFC, '05-06 LFC, '04-05 AFC, '04-05 MUFC.

**5. Cumulative effects?** If finalists see their points per game go down a lot, I'm assuming we'd expect the effect to scale at least somewhat uniformly, i.e. we'd see the largest effect on teams that reach the final, but also some effect on teams that reach the semi-final, a smaller effect on the teams that reach the sixth round, etc. That isn't looked at or addressed in the article. I'm not going to address it here as it's a lot of work and I doubt we'd be able to discern much (if any) effect, but it would provide solid backup evidence to the original article if it were shown.

**6. Summary.** So I guess the takeaway from the original article is that in the past five seasons an FA Cup run has been bad for teams. Although the work here suggests that, if there's such an effect (and it seems fairly unlikely), it's fairly recent, because 6-10 seasons ago the teams that did well in the cup did better in the league in the last 16 games of the season. And the sample size in the original study is really small. And it's not that much bigger here. So take from it what you will, I guess.

For the record, this is all stuff that is fairly easy to clean up. It took me maybe 20 minutes to do the numbers side of this post, and I’d hope the database available to the author is more user-friendly than the Excel sheets that I’ve fairly messily merged together over the years.

Note 1. In the past five seasons 57 of 100 teams have demonstrated this behaviour, which is almost exactly the same proportion as for the past 14 seasons. So eight is maybe a couple more than we'd expect to see, but this group of teams appears to feature a higher proportion of teams that score an extreme number of points than an average group, so it doesn't really surprise me.

Note 2. For transparency: I picked five further seasons as it gave me a sample size of ten seasons, which is a round number (as I suspect the author of the original piece did with five). However, lest it also look like I cherry-picked, I went back and checked, and in what would be the 11th season of the analysis MUFC scored more points per game in their first 22 games than in their final 16. This, however, doesn't really change any of the conclusions in this post, unlike the effect that a sixth season would have had on the original piece.


“The conclusion from these graphs is quite simple actually. Expected Goals Ratio forms an impressive improvement on raw shot metrics at each and every point in the season. It picks up information much like the raw shot metrics do in the very early stages, then predicts future performance significantly better at early to mid-season, and also holds predictive capacities for longer. It makes sense to use Expected Goals Ratio from as early as four matches played. Even that early, it is as good a predictor for future performance as Points per Game and Goals Ratio will ever be.”

However, I’m not sure that’s necessarily true. Below I’ve reproduced Sander’s sixth plot using my own dataset. As I don’t have data from the Eredivisie the dataset here is that from the ’12-13 and ’13-14 seasons for the big 5 leagues. I’d be happy to include the Eredivisie data if someone were to forward it. I’ve also used a simple Team Rating I made for the European leagues in place of ExG:

Now it’s more than fair to say that Team Rating (and in Sander’s case Expected Goals) has a markedly stronger correlation to the points per game that a team scores in the future than any of the other metrics. STR outperforms TSR, and by a marked amount in the middle of the season, whilst GR and TSR are essentially equivalent from games 13 onwards. However, if we’re looking at how good a metric is at predicting future points we should really be looking at the error in those predictions, where the smaller the error, the better the metric at predicting what will happen. I’ve plotted the error for each of the metrics above after each match of the season below (for methodology see note 2). For reference, if R-squared is doing a good job of outlining how well each metric is predicting future performance here are three features we’d expect to see:

1. Team Rating gets off to a flying start, with a clear lead over the other metrics after just 2-3 games

2. STR catches TSR after 4 games and comfortably surpasses it by the time 8 have been played

3. GR catches TSR after 13 games and remains at least as predictive as TSR for the remainder of the season

So onto the plot:

What do we see? Well TSR runs the show for the first nine weeks of the season, at which point the Team Rating takes over and holds the lead until week 27, and from there the metrics become fairly interchangeable. At the ‘bad end’ GR is pretty bad for the entire season, finally joining the pack after 28-30 games. This is a pretty different story from the one we’d expect from looking at the plot of correlation coefficients, and not one of the three features outlined above is really present. I’d contend that this is a better way of assessing which metric is the best at predicting future points per game.

Does this mean that ExpG isn’t a better predictor of future points per game than TSR, STR, or GR? Without running the same study I can’t say, but I think this shows that the correlation coefficient doesn’t provide the required evidence to state that conclusively.

Finally, to read more on R-squared I suggest reading any of the following multitude of links by Phil Birnbaum (even with this many I know I’m missing some) 1, 2, 3, 4, 5, 6, 7.

Note 1: I should note that the work by me referenced in Sander’s post has specifically looked at ‘how reproducible is a metric from one season to the next‘ and focussed solely on the Premiership. Though, based on that work, I suggested that I’d use TSR more than STR for the Premiership only, whether that conclusion holds for a markedly different dataset such as this one is unknown. Furthermore, my thinking has changed somewhat in the ensuing 3-4 years and, as outlined above, I no longer think relying on R-squared is the best way to go about such a study; rather, looking at the errors produced by a metric is a better method.

Note 2: For each of the metrics (GR/TSR/STR/Team Rating) I’ve determined the relationship between the metric and the points per game each team has scored so far in the season. I’ve then used that relationship to determine the points per game we’d expect each team to score over the remainder of the season, and calculated the difference between that value and the points per game the team actually does score over the remainder of the season. The STDEV reported is that of the difference between the predicted end of season points and the actual end of season points for each team.
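The Note 2 procedure can be sketched as follows (function names are mine, and the toy inputs in any example would stand in for each team's metric value and ppg after N games):

```python
from statistics import mean, pstdev

def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def prediction_error_sd(metric, ppg_so_far, ppg_rest):
    """Per Note 2: fit metric -> ppg-so-far, use that fit to predict
    rest-of-season ppg, and report the spread of the prediction errors."""
    a, b = fit_line(metric, ppg_so_far)
    errors = [(a + b * m) - actual for m, actual in zip(metric, ppg_rest)]
    return pstdev(errors)
```

Running this after each match week, for each metric, produces the error curves plotted above: the lower the curve, the better the metric is predicting what actually happens.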


And in table form:

So, onto this weekend's matches – what are the implications of each of the results for the teams playing the games?

If you want to skip down to a particular game, the matches are in the following order:

1. Hull v Crystal Palace

2. Leicester v Burnley

3. Liverpool v West Brom

4. Sunderland v Stoke

5. Swansea v Newcastle

6. Aston Villa v Man City

7. Man United v Everton

8. Chelsea v Arsenal

9. Tottenham v Southampton

10. West Ham v QPR

1. Hull v Crystal Palace

Yawn.

2. Leicester v Burnley

Burnley are in real trouble with anything other than a win.

3. Liverpool v West Brom

LFC are one of the front runners for the 3rd/4th places this season, but there are a lot of teams who aren’t all that far behind.

4. Sunderland v Stoke

Meh.

5. Swansea v Newcastle

There are a lot of most-likely meaningless games on Saturday. It gets better on Sunday.

6. Aston Villa v Man City

City didn’t lose any ground to Chelsea last week, but they didn’t gain any either. It’s another important game for them.

7. Man United v Everton

Two teams fighting for similar spots in the table means this one has big implications. The odds of each team playing in Europe will increase >10% with a win.

8. Chelsea v Arsenal

Both teams have faced soft opposition so far this season. Chelsea will be fairly heavy favourites, and can put a significant dent in Arsenal’s top four aspirations with a win, whilst Arsenal can seriously damage Chelsea’s advantage in the title race.

9. Tottenham v Southampton

I think this will be a fun game to watch, and the implications are suitably large. Spurs need a win to have a coin flip chance of getting into any of the European competitions next season.

The Team Ratings keep liking Southampton and they just keep racking up victories. It’s incredibly unlikely but a So’ton win coupled with losses for Chelsea and City would take the Saints up to a 7% chance of the title…

10. West Ham v QPR

Now this is a classic six-pointer. There’s only about a 25% chance that neither of these teams is relegated at the end of the season, and a 15% chance that both of them go.


And in table form:

So, onto this weekend's matches – what are the implications of each of the results for the teams playing the games?

If you want to skip down to a particular game, the matches are in the following order:

1. Liverpool v Everton

2. Chelsea v Villa

3. Palace v Leicester

4. Hull v Man City

5. Man United v West Ham

6. Southampton v QPR

7. Sunderland v Swansea

8. Arsenal v Spurs

9. West Brom v Burnley

10. Stoke v Newcastle

1. Liverpool v Everton

Both teams can be safe in the knowledge that victory deals a fairly sizeable blow to their opponents' European aspirations – a nice bit of added spice.

2. Chelsea v Villa

At this point Villa look very likely mid-table candidates. Chelsea on the other hand move to being fairly comfortable title favourites with a win.

3. Palace v Leicester

Has implications at the bottom, but both sides are currently too safe for it to be deemed a classic six-pointer.

4. Hull v Man City

Massive for City. Anything other than a win is likely to give Chelsea a big lead in the title race.

5. Man United v West Ham

With a loss United would be down to just a 1 in 7 chance of making the top four. They'd be above 1 in 5 with a win.

6. Southampton v QPR

Probably the set of predictions that stands out most between my model and others. It loves So’ton, and with a win it would have them odds-on to get a top-four place. QPR on the other hand would move to coin-flip territory with regards to relegation should they lose.

7. Sunderland v Swansea

Yawn, really. Sunderland could do with avoiding defeat but both sides look pretty set for a mid table finish.

8. Arsenal v Spurs

Similar implications to the Merseyside derby. Massive derby.

9. West Brom v Burnley

A defeat and Kenny Loggins is coming knocking, Burnley.

10. Stoke v Newcastle

Yawn. Why bother? This should have been hidden away at 3 pm on Saturday rather than showcased on Monday night. There’s about a 2 in 3 chance of both teams finishing mid-table regardless of the result.


1. the variance in the number of points that teams score in a given season can be attributed to one of two factors – skill and luck.

Expressed mathematically that can be summarised as

Variance(Observed) = Variance(Skill) + Variance(Luck)

Or, to put it another way: STDEV(Observed)^2 = STDEV(Skill)^2 + STDEV(Luck)^2

2. obviously we can only predict the skill part – predicting which teams will get lucky is a fool's game.

Thus, a quick way to check whether a model gives rise to a sensible set of predictions is to consider the size of the variance in the predictions compared to those in the Premiership table that can be attributed to skill alone.

But how do we know how much of the variance is due to skill and how much is due to luck? Well, we have estimates for two parts of the equation and thus can determine the third. The observed standard deviation of points in the Premiership over the past 14 seasons is ~16.6 points. Within a given season that can range from ~13 – 20 points, but on the whole ~16.6 is a pretty solid estimate. The standard deviation due to luck has been estimated a couple of ways. I’ve estimated it to be ~8.2 points (though I’m aware that is an overestimate as home advantage isn’t factored in), whilst Neil Charles has estimated it to range from 7.0 – 7.6 for individual teams. If we take those two extremes (7.0 and 8.2) and plug them into the equation along with an observed standard deviation of 16.6 points, then we get a standard deviation due to skill of 14.4 – 15.1 points.

That gives us a good benchmark. But, as I’ve shown previously, the standard deviation of points in the Premiership is rising, and in the last three seasons the standard deviation in points has been 17.9 points (that being said, it is 16.7 if we consider the past four seasons). Let’s be somewhat generous and suggest that the standard deviation of the past three seasons is a more accurate reflection of the spread of talent in the league today than the last 14 (or even four) seasons. Following the same method as in the previous paragraph that would give us a standard deviation due to skill of 15.9 – 16.5 points.
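The arithmetic in the last two paragraphs follows directly from the variance identity above; a quick sketch (function name mine):

```python
import math

def skill_sd(observed_sd, luck_sd):
    """Var(Observed) = Var(Skill) + Var(Luck), so the skill standard
    deviation is the root of the difference of the squared SDs."""
    return math.sqrt(observed_sd ** 2 - luck_sd ** 2)

# e.g. skill_sd(16.6, 8.2) ~ 14.4 and skill_sd(16.6, 7.0) ~ 15.1,
# which reproduces the 14.4 - 15.1 range quoted above
```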

So with that conservative estimate I think we can pretty fairly say that any set of predictions with a standard deviation in its points of more than 16.5 is confident not only in its ability to predict skill, but is also trying to predict some of the variance due to luck.

Prior to the season Simon Gleave gathered a whole slew of Premiership predictions which may be found here. In there were a total of 22 predictions made using models. Below is a table showing the standard deviation in the predictions of those models, in order of most conservative to most confident. I’ve also added two benchmarks – that each team scores a league average 52 points (which obviously has a standard deviation of 0 points) and a simple model that makes a prediction for each team by regressing the number of points that the team scored last season.

Most of the models fall into the range below 16.5. I don’t know the details of all of the models so I’ll discuss the ones that I’m most familiar with.

1. It’s no surprise to see the raw TSR predictions close to the top – it does well for what it is but has very little knowledge and so is, by necessity, modest in how accurately it claims to be able to predict what will happen in the future.

2. It’s also no surprise to see the predictions based on the Team Ratings to have a higher standard deviation than TSR – as they incorporate more information.

3. The simple points-based regression model gives a standard deviation of >16.5 – why is that? Well, it's due to the fact that the ’13-14 season saw a very wide spread in points distribution throughout the league. In most seasons this model comes in with a standard deviation of <16.5.

Finally – two models in particular (those by Steve Lawrence and James Yorke), and maybe a couple of others, stand out from the crowd in that they seem confident in their ability to predict a significant proportion of the luck seen in the league. What does that mean? Well Phil says it best (seriously – go and read the piece, it’s excellent):

Without looking at these models I’m not sure what’s going on – but I can’t really think of a reasonable way that the standard deviation should be that high.

One last thing – it’s early in the season and I’ll update this at some point as the season goes on – but here’s a table with the current performance of the models, from most to least accurate. The final column is the standard deviation in the points predicted by each model. I’ve used conditional formatting to highlight it (green = more modest spread, red = wide spread) but it’s not hard to spot the pattern:


For analysis of predictions in prior seasons see here: 2011-12, 2012-13, 2013-14

These predictions are based on Team Ratings – the methodology of which can be found here

Finally for pre-season predictions of the points scored by the teams in the next four tiers of English football see here

Per the model, here’s the probability that a given team will win or finish in the Champions League places:

And finally, per the model, the probability that a given team will be relegated:


That being said, I’m against the idea of adding a 39th game to the Premiership, with my main objection being the unbalancing of the schedule. It’s patently unfair that one team may get an extra game against City whilst another gets to play against West Ham (that sentence is spoken as a West Ham fan).

With that in mind this would be my proposal:

Instead of adding a 39th game, I'd keep the 38-game schedule, ensuring that each team would continue to play every other team twice over the course of the season. However, I would have each team play 36 games in England/Wales – 18 at home & 18 away – and 2 games overseas. This proposal doesn’t unbalance the schedule and it ensures that each team has an equal number of home and away games.

The league would be split into five groups of four teams, with the teams in each group being randomly selected. Each team would play one ‘home game’ and one ‘away game’ against the other teams in its group with a schedule that looks something like:

This covers 4 of the 12 total fixtures that would be played between the four teams in a group over a full season. Thus, in the remainder of the season (the part played in England/Wales), Team 1 would have home games against Teams 3 & 4, whilst also playing away games against Teams 2 & 3.
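As a sketch of how the draw and the overseas round could work (the function names and the particular pairing are mine; the post doesn't fix an exact schedule, only that each team plays one 'home' and one 'away' game overseas):

```python
import random

def draw_groups(teams, group_size=4, seed=None):
    """Randomly split the 20 Premier League teams into five groups of four."""
    rng = random.Random(seed)
    pool = list(teams)
    rng.shuffle(pool)
    return [pool[i:i + group_size] for i in range(0, len(pool), group_size)]

def overseas_fixtures(group):
    """One schedule satisfying the proposal: four matches in which every
    team in the group appears exactly once as host and once as visitor,
    i.e. 4 of the group's 12 full-season fixtures are played overseas."""
    a, b, c, d = group
    return [(a, b), (c, d), (b, c), (d, a)]
```

With this pairing, Team 1 hosts Team 2 and travels to Team 4 overseas, which is consistent with the domestic home/away split described above.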

Potential locations could then bid to ‘host’ one of the groups. The money from the five winning bids would add to the Premier League's shared revenue pot, and the host city would get to keep all of the revenue generated by the four matches.

Thoughts? Alternatives?
