Autopsy Canny Crows take a giant dump on dopey dogs from up high. 63-62.


But surely here is where the problem lies, Mof. We already had him for a couple of years. Martin is cooked and is the only ruck we have brought in since Sweet. Surely we need two or three developing rucks so that at least one has an opportunity to step up. Yet the best we have done is sign a known player who, even up to last year, was not trusted to develop in the seniors.

I was pretty vocal about chasing Strachan and/or Max Lynch last trading period (Strachan signed just before it). We went hard for Soldo, missed out, and seemingly had no plan B. It seems most fans thought it was poor, except for our list management team.
 
Obviously injuries are playing a significant role in our performances, but watching several of the other games this past weekend, it is clear that the performances that stood out were by teams whose work rate was first class.
St Kilda, Fremantle and Melbourne, for example, managed to get more players to contests than their opponents because they were prepared to run hard to assist their teammates.
This has been the main factor in Melbourne's recent dominance. Just watch how often and how quickly their players run both ways to support each other, create overlap and move the ball quickly out of their backline and further.
We have done this ourselves in the past but in recent times, even our acclaimed midfield have been found wanting in this regard.
In this week's game, none of Bont, Macrae and Smith earned a vote in our Ching. While this is by no means the clearest indicator of performance, it tells us that even our best players are not working as hard as the Petracca, Oliver, Brayshaw combination at Melbourne.
Until our midfielders and flankers decide to get serious and work harder we will continue to struggle.
 
I was pretty vocal about chasing Strachan and/or Max Lynch last trading period (Strachan signed just before it). We went hard for Soldo, missed out, and seemingly had no plan B. It seems most fans thought it was poor, except for our list management team.
After watching Lynch today, we didn't miss anything there.
 
The same genius who said we'd get nowhere in 2016? Yeah, a real Nostradamus he was
Apparently we also had shit depth that year. Nek minut, flags for the 1sts and reserves.

Never seen a guy make so many erroneous predictions. We all make a bad call here and there, no doubt, but you could almost take it to the bank every time that the opposite of what he'd say would happen.

Not sure we've had a poorer footy judge on here.
 
Not much to say apart from being disappointed with how they’ve played this season so far, which on paper looks to be slipping away. Whilst I wrote a few months back that I thought they’d have a slow start to the year, I didn’t expect that to involve dropping games that would have been put away last year. Not confident going forward, and I see some major changes needed as the team doesn’t have anywhere near the spark that it did last season.
 
I see this versatility mantra as a real

Not much to say apart from being disappointed with how they’ve played this season so far, which on paper looks to be slipping away. Whilst I wrote a few months back that I thought they’d have a slow start to the year, I didn’t expect that to involve dropping games that would have been put away last year. Not confident going forward, and I see some major changes needed as the team doesn’t have anywhere near the spark that it did last season.
It would seem that there are a few who are a bit too pleased with last year's performance and can't find the intensity to go again, or who think we are good enough that it will just happen.
They've forgotten to bring the required work rate to be successful.
 
It would seem that there are a few who are a bit too pleased with last year's performance and can't find the intensity to go again, or who think we are good enough that it will just happen.
They've forgotten to bring the required work rate to be successful.

Or was it that last year we were lucky to get to the GF? The intensity doesn't appear to be there at the moment; even with the outs, the team hasn't been consistently showing that hunger/desire on the field.
 
Or was it that last year we were lucky to get to the GF? The intensity doesn't appear to be there at the moment; even with the outs, the team hasn't been consistently showing that hunger/desire on the field.
Those lucky 2 beatings we gave Port and Essendope in the finals were umpire-assisted.
 


I’ve been reading about our mids and the lack of defensive mindset.
On the weekend, I spent a bit of time watching the mids and centre square contests (gave up on watching Martin in there). To me it looked as though we spent too much time holding back and watching opponents rather than watching the footy and backing ourselves. I can only assume that the players, like me, gave up on watching Martin also.
I’m not sure if Sweet has played enough footy to be ready, and I fear we may go with Martin again v Draper for his experience, but there’s no way he’ll cover him on the ground. Sweet v Draper would surely see one of them suspended and is a little more mouth-watering, but I still can’t see how it can change the mids’ mindset.

Think Gardy is going to need a superhuman effort to keep 2 metre Peter under control. Let’s hope he can get a few hours’ sleep in the upcoming days.
 
They miss less often than you think; there are academic peer-reviewed publications that have validated them. They pass my eye test nearly every time, with few exceptions. It's a favourite line of argument on here that they're flawed, but it's just not true.

Thanks for that. Sorry for the late acknowledgement of it - I mentally bookmarked your post and only just got back to it.

I'll post the (free) abstract from your link here so that others don't have to go find it. I've bolded three bits that I refer to in my discussion below.

Abstract

This study investigated the validity of the official Australian Football League Player Ratings system. It also aimed to determine the extent to which the distribution of points across the 13 rating subcategories could explain Australian Football League match outcome. Ratings were obtained for each player from Australian Football League matches played during the 2013–2016 seasons, along with the corresponding match outcome (Win/Loss and score margin). The values for each of the 13 subcategories that comprise the ratings were also obtained for the 2016 season. Total team rating scores were derived as an objective team outcome for each match. Percentage agreement and Pearson correlational analyses revealed that winning teams displayed a higher total team rating in 94.2% of matches and an association of r = 0.96 (95% confidence interval = 0.95–0.96) between match score margin and total team rating differential, respectively. A Partial Decision Tree (PART) analysis resulted in seven rules capable of determining the extent to which relative contributions of rating subcategories explain Win/Loss at an accuracy of 79.3%. These models support the validity of the Australian Football League Player Ratings system and its use as a pertinent system for objective player analyses in the Australian Football League.

Have you read the full article? I'm not prepared to pay 29 quid for it, but I'd be interested in their methodology, because the way it's described in the abstract it seems to be saying they validate individual players' ratings by seeing how they relate to their side winning and the margin of that win. This is not my gripe with the ratings system (FWIW I had no opinion on that aspect of it, but I'd expect a high correlation).
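
For anyone who wants to see what those two headline checks from the abstract actually amount to, here's a rough sketch on made-up numbers. The match data, column names and figures below are purely hypothetical (not from the paper); it just shows how the 94.2% agreement and the r = 0.96 figures would be computed at team level:

```python
# Rough sketch of the two team-level checks described in the abstract,
# run on made-up numbers. "rating_diff" = home total team rating minus
# away total team rating; "margin" = home score minus away score.

import math

matches = [
    # (rating_diff, margin) -- hypothetical matches
    (35.2,  41), (-12.8, -15), (4.1,  -3), (58.0,  72),
    (-22.5, -30), (9.7,   11), (-3.3,   5), (27.4,  19),
]

# Percentage agreement: how often the side with the higher total rating won.
agree = sum(1 for d, m in matches if (d > 0) == (m > 0))
print(f"agreement: {agree / len(matches):.1%}")

# Pearson correlation between rating differential and score margin.
n = len(matches)
mean_d = sum(d for d, _ in matches) / n
mean_m = sum(m for _, m in matches) / n
cov  = sum((d - mean_d) * (m - mean_m) for d, m in matches)
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d, _ in matches))
sd_m = math.sqrt(sum((m - mean_m) ** 2 for _, m in matches))
print(f"r = {cov / (sd_d * sd_m):.2f}")
```

Both checks only ever look at the sum of a team's ratings against the result, which is why I don't think they say much about individual players.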

They then go on to say "these models support the validity of the ratings system and its use as a pertinent system for objective player analyses".

What I'm interested in is the evaluation of individual players' performances, whether that be "how well did x play?" (i.e. a standalone assessment) or "who were the best players?" and "did x play a better game than y?" (i.e. a comparative analysis), regardless of who won the game.

In my view it's a bit of a leap to say it's valid as an individual player rating tool because the side with the highest aggregate rating is very likely to win. Now I've only read the abstract so maybe that's not what they are claiming. Or maybe they do indeed fully justify its use as an individual player rating system in the full paper.

Hence my interest in whether you have read the full paper yourself.

I used to follow these tables quite keenly but then after watching a few games (last year and maybe the year before) I saw some stark anomalies between the ratings and what I had watched, so I pretty much gave up on them after that. I agree they are right a lot of the time but they should only be used as a rough guide and I still believe they can be quite astray on some individual players' performances. Footy is very much a multi-dimensional game so reducing every player's performance down to a single index value was always going to be a pretty ambitious undertaking.

You mention "academic peer-reviewed publications" (plural). Do you have the names or links to any other papers?

I'm open to persuasion but I'll need the evidence.
 
Thanks for that. Sorry for the late acknowledgement of it - I mentally bookmarked your post and only just got back to it.

I'll post the (free) abstract from your link here so that others don't have to go find it. I've bolded three bits that I refer to in my discussion below.

Abstract

This study investigated the validity of the official Australian Football League Player Ratings system. It also aimed to determine the extent to which the distribution of points across the 13 rating subcategories could explain Australian Football League match outcome. Ratings were obtained for each player from Australian Football League matches played during the 2013–2016 seasons, along with the corresponding match outcome (Win/Loss and score margin). The values for each of the 13 subcategories that comprise the ratings were also obtained for the 2016 season. Total team rating scores were derived as an objective team outcome for each match. Percentage agreement and Pearson correlational analyses revealed that winning teams displayed a higher total team rating in 94.2% of matches and an association of r = 0.96 (95% confidence interval = 0.95–0.96) between match score margin and total team rating differential, respectively. A Partial Decision Tree (PART) analysis resulted in seven rules capable of determining the extent to which relative contributions of rating subcategories explain Win/Loss at an accuracy of 79.3%. These models support the validity of the Australian Football League Player Ratings system and its use as a pertinent system for objective player analyses in the Australian Football League.

Have you read the full article? I'm not prepared to pay 29 quid for it, but I'd be interested in their methodology, because the way it's described in the abstract it seems to be saying they validate individual players' ratings by seeing how they relate to their side winning and the margin of that win. This is not my gripe with the ratings system (FWIW I had no opinion on that aspect of it, but I'd expect a high correlation).

They then go on to say "these models support the validity of the ratings system and its use as a pertinent system for objective player analyses".

What I'm interested in is the evaluation of individual players' performances, whether that be "how well did x play?" (i.e. a standalone assessment) or "who were the best players?" and "did x play a better game than y?" (i.e. a comparative analysis), regardless of who won the game.

In my view it's a bit of a leap to say it's valid as an individual player rating tool because the side with the highest aggregate rating is very likely to win. Now I've only read the abstract so maybe that's not what they are claiming. Or maybe they do indeed fully justify its use as an individual player rating system in the full paper.

Hence my interest in whether you have read the full paper yourself.

I used to follow these tables quite keenly but then after watching a few games (last year and maybe the year before) I saw some stark anomalies between the ratings and what I had watched, so I pretty much gave up on them after that. I agree they are right a lot of the time but they should only be used as a rough guide and I still believe they can be quite astray on some individual players' performances. Footy is very much a multi-dimensional game so reducing every player's performance down to a single index value was always going to be a pretty ambitious undertaking.

You mention "academic peer-reviewed publications" (plural). Do you have the names or links to any other papers?

I'm open to persuasion but I'll need the evidence.

I haven't read it, but I see that the accepted version of the paper (i.e., identical in content if not formatting) is available from the authors' website.
 
I haven't read it, but I see that the accepted version of the paper (i.e., identical in content if not formatting) is available from the authors' website.
Thanks. On a quick skim I can't see that it addresses the point I was making about individual player assessments. Here's their conclusion (from the link you provided). It's not particularly far-reaching. (My bolding to emphasise they are only validating the ratings system in terms of match outcomes/team performance).

Conclusion
The results from this study support the validity of the AFL Player Ratings system and its ability to objectively assess combined player performance in AF. By utilising objective outcomes as dependent variables, a more thorough understanding of how equity is used as a quantifiable measure to relate to successful performance can be achieved. To further refine the generalisability of the model produced in phase two, subsequent seasons of data could be added once they become available. Future work should focus on the continual development of improving the ratings system as new technologies become available, as well as the interpretation and application of the AFL Player Ratings system for objective performance analysis and operational decision-making.
They do discuss some of the criteria and which ones are more important. For instance, ball use: kicks over 40m have a strong positive correlation and kicks that go directly to an opponent have a strong negative correlation. From that you could extrapolate that good ball users should score highly (I presume they do), but I'm none the wiser on how pressure acts, taps to advantage, spoils, etc. are assessed relative to that, and whether the ratings system puts the right weight on the many diverse features of the way the game is played.
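
To make that concrete, here's a toy sketch of how a single index collapses several subcategories into one number. The categories and weights below are invented for illustration only (the real system uses an expected-points "equity" model, not fixed weights like these), but it shows why the choice of weights decides whether a damaging ball user or a defensive workhorse comes out on top:

```python
# Purely illustrative: collapsing a few subcategories into one rating.
# Weights and category names are invented, not the AFL's actual model.

weights = {
    "kick_40m_plus": 1.2,      # long effective kicks rewarded
    "kick_to_opponent": -1.5,  # turnovers punished
    "tap_to_advantage": 0.4,
    "spoil": 0.5,
    "pressure_act": 0.3,
}

def rating(player_stats):
    # Weighted sum over whichever categories the player registered.
    return sum(weights[k] * player_stats.get(k, 0) for k in weights)

# Two hypothetical players: a damaging ball user vs a defensive workhorse.
ball_user = {"kick_40m_plus": 10, "kick_to_opponent": 3}
workhorse = {"spoil": 9, "pressure_act": 25, "tap_to_advantage": 2}

print(rating(ball_user))   # 7.5
print(rating(workhorse))   # 12.8
```

Whether a Dale Morris type rates well or badly comes down entirely to how generously the defensive categories are valued, which is exactly the bit the paper doesn't really test.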

I reckon Dale Morris would score pretty poorly on these player rating charts (perhaps someone could search for some 2016 charts and see if he did?), but we'd be almost unanimous here in saying he was a key player for the Dogs nearly every week. Moz only got 1 Brownlow vote in his 253-game career. I'd hope the player ratings were a little more sensitive to his value than the umpires were!

The authors do suggest that this analysis should give clubs confidence in using the ratings system with things like player contracts and list management, but as far as I could see they don't say it is a totally reliable tool for saying player x is better than player y (or played better than player y in a particular game).

Overall it feels like it's just stating the obvious. Teams that have more long kicks, more metres gained and fewer turnovers are highly likely to win.

I think we already knew that.
 
Obviously injuries are playing a significant role in our performances, but watching several of the other games this past weekend, it is clear that the performances that stood out were by teams whose work rate was first class.
St Kilda, Fremantle and Melbourne, for example, managed to get more players to contests than their opponents because they were prepared to run hard to assist their teammates.
This has been the main factor in Melbourne's recent dominance. Just watch how often and how quickly their players run both ways to support each other, create overlap and move the ball quickly out of their backline and further.
We have done this ourselves in the past but in recent times, even our acclaimed midfield have been found wanting in this regard.
In this week's game, none of Bont, Macrae and Smith earned a vote in our Ching. While this is by no means the clearest indicator of performance, it tells us that even our best players are not working as hard as the Petracca, Oliver, Brayshaw combination at Melbourne.
Until our midfielders and flankers decide to get serious and work harder we will continue to struggle.
Hear, hear.

While the rest of the ground has been decimated by injury, our midfield has been largely intact. Rather than playing Bont, Libba and Macrae forward for significant portions to even it out, we should load up the midfield, increase our intensity and make sure we dominate contested ball and clearances; the rest will take care of itself if we do that.
 
