If Heeney?


That is better than the Brownlow in terms of spread and also the type of players in it, but it is still too heavily focused on midfielders. Champion Data also need to get better at rating the forwards and defenders.

Still, I take this list over the 2024 Brownlow list and 2024 Coaches Association list.

Changes to the ruck rules have meant there are a lot more rucks able to grab the ball out of the ruck and get a clearance than before. So rucks have been elevated in importance and this is reflected in their seemingly inflated player ratings. Key forwards definitely do not fare well in the player ratings, but again, this reflects the way the game is played now, with teams defending key forwards as a priority to stop them dominating.

If the Swans win the GF as I expect them to, the last 2 premierships will have been won by teams without any marquee key forwards. Melbourne also won the 2021 flag with workmanlike key forwards. Only the 2022 flag was won with conventional star key forwards since the introduction of the stand rule.

The game changes rules regularly and it alters the dynamic of who is effective compared to other players. The player ratings just reflect what the players actually do and how valuable it is, they don't care if you are Steven May or Charlie Cameron or who you are.
 
Would it? These are the top 10 rated players by average rating in 2024...


View attachment 2122541

That doesn't look like a lucky dip to me.

In the match you gave Mitchell "clear BOG" he had 35 disposals gaining 340m, but went at 65% efficiency. 22 of his disposals were uncontested. He had 1 goal assist, no goals and just 5 score involvements. Only 3 tackles and 16 pressure acts. A third of his disposals were "ineffective" and the 2/3rds that were effective yielded only 5 score involvements, so not very damaging.

The Player Ratings are based on what has actually taken place, not what is imagined.

Lucky Dip is hyperbole to say that it is rubbish, as my 11 examples clearly show.

Now do a stat comparison of Lachie Whitfield and Nick Bryan this season. Show me what 'actually took place' that justifies that (or literally hundreds of other ridiculous examples).
 


In the match you gave Mitchell "clear BOG" he had (stats)...so not very damaging.

The Player Ratings are based on what has actually taken place, not what is imagined.

So, I was at the game and actually watched it. He was BOG. Nothing to do with my imagination (which is quite a rude thing to say).

Here is every player rated on their performance that game from the AFL website:


To save you time, here is what they said of Sam Mitchell:
Sam Mitchell – 9
On the receiving end of boos from the crowd throughout the game, Mitchell didn't let it get to him, and didn't let the Dockers get near him. The clear best on ground, he had 35 touches including 13 uncontested possessions. With tagger Ryan Crowley watching from the stands, up to half a dozen Freo players spent time on the Hawks veteran, to little effect.

Notice how it says clear best on ground.

Here are the player performances from that game as rated by Fox Sports:
https://www.foxsports.com.au/afl/af...d/news-story/3e602b02c3997f868d96805f1cd2648d

They also gave Sam Mitchell the highest rating on the ground and said he was 'brilliant'.

This is the AFL's match report on the game:

Not only does it list Mitchell as best on for the winners, it describes him (and Hodge) as the main influences behind the victory.

I'm yet to find a source that doesn't name Mitchell as best on ground.

But sure, tell me how a rating system (that has Pittonet over 200 spots higher than Mac Andrew and 350 spots higher than Charlie Cameron this season) is what actually happened, and everyone that watched it was 'imagining'.
 
How is the weighting for defensive running? I'm sure the ranking points would be heavy on forward-50 play compared to tackles or positioning. I don't know much about it, but football is too complex to base all opinions on some math equation.
 
Lucky Dip is hyperbole to say that it is rubbish, as my 11 examples clearly show.

Now do a stat comparison of Lachie Whitfield and Nick Bryan this season. Show me what 'actually took place' that justifies that (or literally hundreds of other ridiculous examples).

Your 11 examples didn't show that at all. The 2024 player ratings, if you order them by average rating per game, tell you exactly that: which players have been observed and recorded doing the most effective actions during matches.

What your examples did was take any no-name who played one or a handful of games and happened to rate well, and compare them to big names who have been brilliant in other seasons but not as effective this season.

Nick Bryan played just 5 games, and on average performance was the 21st highest rated ruckman in the AFL. You have leapt on the fact he is not a best-22 player, and ruckmen have been elevated in the ratings in 2024 due to the new ruck rules.

For a start, CD normally quote their highest rating players as the x-th highest rated player to play 10 or more games, in order to sift out some of the variance. This is sensible.

But let's look first at Whitfield, then Bryan. Whitfield is the 31st highest rated medium defender in the AFL based on average player rating in 2024, of those who played 10 or more games. Within that cohort of medium defenders there are now quite a lot of players who don't actually play in defence all the time. Nic Martin of the Bombers, Sheezel, Holmes, Zorko, the list goes on: they are standing in the back half of the ground but they are not really defenders. Whitfield to some extent is also like that. In this cohort Whitfield was actually 7th for coaches votes, so I am already a bit hesitant about whether your "deservedly AA" is correct. His player rating of 9.66 in 2024, it must be said, is appreciably lower than in 5 of his last 6 seasons.

So let's look at Whitfield's averages.

Disposals: 30
Disposals sans kick-ins: 23.7
Disp eff: 82% (ranked 17th in med def cohort of players to play 10+)
Kick eff: 81% (12th)
Metres gained: 495 (8th)
Clangers: 3.7 (2nd) *the higher you rank, the worse you are
Disp/clanger: 8.1 (outside top 40)
Turnovers: 4.9 (5th) *as for clangers
Disp/turnover: 6.1 (outside top 30)
Cont poss: 5.7 (16th)
Uncontested poss: 18.7 (3rd)
Possessions: 24.4 (5th)
Cont poss %: 23.3 (~80th)
Intercepts: 4.2 (~50th)
Ground gets: 4.4 (17th)
Hard gets: 0.9 (~20th)
Crumbs: 1.2 (~30th)
HB receives: 10.6 (5th)
Clearances: 1.3 (14th)
Marks: 7.1 (4th)
Cont marks: 0.1 (~60th)
I50: does not register, outside top 50-odd
Intercept marks: 0.5 (~80th)
Goals: 0 (70+ med defenders have at least 1 goal)
Assists: 0.2 (~50th)
Score involvements: 4.1 (12th)
SI %: 18.2 (12th)
Launches: 1.2 (~30th)
Def 1-v-1 loss %: 50% (4th worst of around 130 medium defenders)
Tackles: 2.9 (15th)
Pressure acts: 12.8 (18th)
Spoils: 0.5 (not in top 100)
Kick-ins: 6.4 (2nd)
1%ers: 1.1 (~90th)


So he rates most strongly in metres gained (inflated by kick-ins), kick efficiency, HB receives, uncontested marks, uncontested possessions and kick-ins: mainly things that wouldn't accumulate lots of value, and are less challenging.

He rates poorly in contest losses, clangers, turnovers, contested marks, spoils, goals and assists, and 1%ers. These are mainly things that hurt your team, which would be why he is relegated in the player ratings compared to what you would expect looking at his disposal count.

His highest value things would be his contested possessions and kicking efficiency, which is why he still has a reasonable player rating in 2024.

With Bryan I won't go through all his stats as it is time consuming. But his stats are unremarkable to say the least, though his disposal efficiency is probably decent for his position. What seems to be the case is rucks are accumulating ratings for hitouts and ruck contests won, but they might not be being debited for ruck contests lost. If this is the case it would explain why his rating is high despite him essentially being a losing ruckman. I have definitely noticed this with other rucks this year in particular; they seem to be being elevated beyond their station in the player ratings. I am one who has always thought it is important to have at least a competitive ruckman. A player who can compete decently in the ruck is a very valuable player, and if you don't have one you can just get run through by the better midfields. So maybe it is fair enough that the rucks who are competitive are elevated over handball-receive merchants like Whitfield (in a slightly down year for him).
 
So, I was at the game and actually watched it. He was BOG. Nothing to do with my imagination (which is quite a rude thing to say).

Here is every player rated on their performance that game from the AFL website:


To save you time, here is what they said of Sam Mitchell:
Sam Mitchell – 9
On the receiving end of boos from the crowd throughout the game, Mitchell didn't let it get to him, and didn't let the Dockers get near him. The clear best on ground, he had 35 touches including 13 uncontested possessions. With tagger Ryan Crowley watching from the stands, up to half a dozen Freo players spent time on the Hawks veteran, to little effect.

Notice how it says clear best on ground.

Here are the player performances from that game as rated by Fox Sports:
https://www.foxsports.com.au/afl/af...d/news-story/3e602b02c3997f868d96805f1cd2648d

They also gave Sam Mitchell the highest rating on the ground and said he was 'brilliant'.

This is the AFL's match report on the game:

Not only does it list Mitchell as best on for the winners, it describes him (and Hodge) as the main influences behind the victory.

I'm yet to find a source that doesn't name Mitchell as best on ground.

But sure, tell me how a rating system (that has Pittonet over 200 spots higher than Mac Andrew and 350 spots higher than Charlie Cameron this season) is what actually happened, and everyone that watched it was 'imagining'.

I concede that about Mitchell being generally regarded by casual observers as BOG in that match. But this is the thing about the ratings: they do not lie. They are looking at everything a player does and weighing how effective he is. I am not saying they are infallible, but they would be a lot less fallible than you or I watching a match and deciding who we thought the most valuable players were just from that.

I have explained about the rucks as best I understand it in my post above. They are rating higher than we might expect in 2024, and partly this is down to rule and interpretation changes. There are more stoppages where they do their work, as the whistle is being blown quicker, and they rarely jump these days, so they can win more possession in the stoppages. So it is pointless asking for further explanations about why a competitive ruck who is involved in often 50+ contests per game rates higher than some eye-catching guy you saw take a couple of hangers, or someone who gets goals by playing deep forward.
 
Your 11 examples didn't show that at all. The 2024 player ratings, if you order them by average rating per game, tell you exactly that: which players have been observed and recorded doing the most effective actions during matches.

What your examples did was take any no-name who played one or a handful of games and happened to rate well, and compare them to big names who have been brilliant in other seasons but not as effective this season.

Nick Bryan played just 5 games, and on average performance was the 21st highest rated ruckman in the AFL. You have leapt on the fact he is not a best-22 player, and ruckmen have been elevated in the ratings in 2024 due to the new ruck rules.

For a start, CD normally quote their highest rating players as the x-th highest rated player to play 10 or more games, in order to sift out some of the variance. This is sensible.

But let's look first at Whitfield, then Bryan. Whitfield is the 31st highest rated medium defender in the AFL based on average player rating in 2024, of those who played 10 or more games. Within that cohort of medium defenders there are now quite a lot of players who don't actually play in defence all the time. Nic Martin of the Bombers, Sheezel, Holmes, Zorko, the list goes on: they are standing in the back half of the ground but they are not really defenders. Whitfield to some extent is also like that. In this cohort Whitfield was actually 7th for coaches votes, so I am already a bit hesitant about whether your "deservedly AA" is correct. His player rating of 9.66 in 2024, it must be said, is appreciably lower than in 5 of his last 6 seasons.

So let's look at Whitfield's averages.

Disposals: 30
Disposals sans kick-ins: 23.7
Disp eff: 82% (ranked 17th in med def cohort of players to play 10+)
Kick eff: 81% (12th)
Metres gained: 495 (8th)
Clangers: 3.7 (2nd) *the higher you rank, the worse you are
Disp/clanger: 8.1 (outside top 40)
Turnovers: 4.9 (5th) *as for clangers
Disp/turnover: 6.1 (outside top 30)
Cont poss: 5.7 (16th)
Uncontested poss: 18.7 (3rd)
Possessions: 24.4 (5th)
Cont poss %: 23.3 (~80th)
Intercepts: 4.2 (~50th)
Ground gets: 4.4 (17th)
Hard gets: 0.9 (~20th)
Crumbs: 1.2 (~30th)
HB receives: 10.6 (5th)
Clearances: 1.3 (14th)
Marks: 7.1 (4th)
Cont marks: 0.1 (~60th)
I50: does not register, outside top 50-odd
Intercept marks: 0.5 (~80th)
Goals: 0 (70+ med defenders have at least 1 goal)
Assists: 0.2 (~50th)
Score involvements: 4.1 (12th)
SI %: 18.2 (12th)
Launches: 1.2 (~30th)
Def 1-v-1 loss %: 50% (4th worst of around 130 medium defenders)
Tackles: 2.9 (15th)
Pressure acts: 12.8 (18th)
Spoils: 0.5 (not in top 100)
Kick-ins: 6.4 (2nd)
1%ers: 1.1 (~90th)


So he rates most strongly in metres gained (inflated by kick-ins), kick efficiency, HB receives, uncontested marks, uncontested possessions and kick-ins: mainly things that wouldn't accumulate lots of value, and are less challenging.

He rates poorly in contest losses, clangers, turnovers, contested marks, spoils, goals and assists, and 1%ers. These are mainly things that hurt your team, which would be why he is relegated in the player ratings compared to what you would expect looking at his disposal count.

His highest value things would be his contested possessions and kicking efficiency, which is why he still has a reasonable player rating in 2024.

With Bryan I won't go through all his stats as it is time consuming. But his stats are unremarkable to say the least, though his disposal efficiency is probably decent for his position. What seems to be the case is rucks are accumulating ratings for hitouts and ruck contests won, but they might not be being debited for ruck contests lost. If this is the case it would explain why his rating is high despite him essentially being a losing ruckman. I have definitely noticed this with other rucks this year in particular; they seem to be being elevated beyond their station in the player ratings. I am one who has always thought it is important to have at least a competitive ruckman. A player who can compete decently in the ruck is a very valuable player, and if you don't have one you can just get run through by the better midfields. So maybe it is fair enough that the rucks who are competitive are elevated over handball-receive merchants like Whitfield (in a slightly down year for him).

That is a lot of text to say not a lot.

Bryan is very, very, very average and only got a game due to a lack of options. It's no wonder you won't go through any of his stats because they are terrible and it is inarguable.

Whitfield was All Australian this season. He was in plenty of the suggested sides on here and there was no uproar whatsoever when he made the 22. Even if the argument is that the weightings are bad for rucks, it has Whitfield not in the top 200 players in a season where he was All Australian. That tells you the system is broken.

Even when you highlighted he was the 30-somethingth rated med defender, that's already an obvious problem. You then list the other quality med defenders and ignore all the average ones the 'system' has ahead of him.

Last season, the ratings had 3 Bulldogs players in the top 6 in the entire comp. It had Trey Ruscoe as better than Max Gawn. It had Tarryn Thomas (and Trey Ruscoe) as better than Dusty Martin. It had Matt Flynn as better than Patrick Cripps.

I understand it is a mathematical weighting system but that doesn't make it good, accurate or otherwise. I can make a system that values kick ins twice as much as goals but that doesn't make it right.
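That last point can be sketched in a few lines of Python. Everything here is made up for illustration (the players, stat lines and both weight sets are hypothetical, not Champion Data's actual formula); it just shows that a linear weighted-sum rating will happily invert a ranking depending purely on which weights its designer picked.

```python
# Toy weighted-sum rating: the basic shape of any hand-weighted metric.
stats = {
    "Player A": {"goals": 3, "kick_ins": 0},  # hypothetical stat lines
    "Player B": {"goals": 0, "kick_ins": 8},
}

def rate(player_stats, weights):
    """Score a player as a weighted sum of their stats."""
    return sum(weights[stat] * value for stat, value in player_stats.items())

sensible = {"goals": 6.0, "kick_ins": 1.0}    # values goals heavily
backwards = {"goals": 1.0, "kick_ins": 2.0}   # values kick-ins over goals

for name, weights in [("sensible", sensible), ("backwards", backwards)]:
    ranked = sorted(stats, key=lambda p: rate(stats[p], weights), reverse=True)
    print(name, ranked)
# Player A tops the 'sensible' ranking; Player B tops the 'backwards' one.
```

Same inputs, opposite conclusions: the weights, not the data, decide the order. That is the sense in which a weighting system can be internally consistent and still wrong.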
 
I concede that about Mitchell being generally regarded by casual observers as BOG in that match. But this is the thing about the ratings: they do not lie. They are looking at everything a player does and weighing how effective he is. I am not saying they are infallible, but they would be a lot less fallible than you or I watching a match and deciding who we thought the most valuable players were just from that.

I have explained about the rucks as best I understand it in my post above. They are rating higher than we might expect in 2024, and partly this is down to rule and interpretation changes. There are more stoppages where they do their work, as the whistle is being blown quicker, and they rarely jump these days, so they can win more possession in the stoppages. So it is pointless asking for further explanations about why a competitive ruck who is involved in often 50+ contests per game rates higher than some eye-catching guy you saw take a couple of hangers, or someone who gets goals by playing deep forward.

Mate, it's more than 'casual observers'. It's unanimous among everyone who watched the game, including everyone who is paid to report on it. Just because it is a statistical 'system' doesn't make it correct, as hundreds of examples clearly demonstrate. If everyone agreed player A is #1 on ground and the 'system' thinks he is #20, it shows a flaw in the design of the system, not that everyone's opinions are wrong or 'imagined' or 'lying'. The idea that merely using data makes it less fallible than the consensus of everyone who watched the game shows a lack of understanding of how data works (in this instance, humans have assigned arbitrary weightings to a number of actions in the hope of creating a metric that correctly ranks players). If anything, that is far more fallible than a bunch of experts watching something, assessing actual performance and all independently coming to the exact same conclusion.

Surely you can concede that saying I was imagining Mitchell as best on ground was uncalled for and that I may have been correct, with the "player rankings" producing one of its very common 'anomalies'.
 
Mate, it's more than 'casual observers'. It's unanimous among everyone who watched the game, including everyone who is paid to report on it. Just because it is a statistical 'system' doesn't make it correct, as hundreds of examples clearly demonstrate. If everyone agreed player A is #1 on ground and the 'system' thinks he is #20, it shows a flaw in the design of the system, not that everyone's opinions are wrong or 'imagined' or 'lying'. The idea that merely using data makes it less fallible than the consensus of everyone who watched the game shows a lack of understanding of how data works (in this instance, humans have assigned arbitrary weightings to a number of actions in the hope of creating a metric that correctly ranks players). If anything, that is far more fallible than a bunch of experts watching something, assessing actual performance and all independently coming to the exact same conclusion.

Surely you can concede that saying I was imagining Mitchell as best on ground was uncalled for and that I may have been correct, with the "player rankings" producing one of its very common 'anomalies'.

People paid to report on the game are no more than casual observers. They wouldn't be watching replays of every passage of play over and over to understand everything every player is doing, the way coaches or Champion Data staff do.

Plenty of things a casual observer can understand by watching a game once, but there are also loads of things all of us won't understand or will misremember when we watch once. Player Ratings are neither correct nor incorrect because they are a statistical system. They are valuable because they are about the only ones watching properly who publish useful guidance for footy followers.

The hundreds of examples you think clearly demonstrate the Player Ratings are wrong do nothing of the sort. It is no more credible to say hundreds of people watching once and forming a view prove the Player Ratings wrong than it is to say a dedicated team of people, watching and recording detail others don't even notice and taking the time to form ratings around what they can see are the most telling actions, proves all the casual observers wrong.

Surely you are aware of the Moneyball story, where "expert" professionals who watched and judged baseballers for a living were shown to be about fourth-rate by people who studied data carefully and worked out what actually makes teams win games.

There is not one shadow of a doubt that the Player Ratings system is fallible. It is just that it is a lot less fallible than a bunch of know alls like us sitting around watching a game then nodding our heads knowingly about how "elite" one player is or how big a spud another guy is.

When they had Mitchell 20th highest rated player in that Preliminary Final, they would very likely be wrong. He could have been better or worse than that by a little or a lot in reality. But I would back their methods over ours any day over a large number of assessments. I am in some sort of informed position to judge this: I am a former full-time punter of 20 years who has been put out of commission by people who understand data a lot better than I do. And I was objectively excellent in my field; I know that because of the profits I made. So if they are beating objectively excellent judges with data, they are not going to have much trouble outmatching your ordinary run of one-watch analysts.

Tell me this... in the 2008 Grand Final, it seems quite a lot of people think Gary Ablett was the best player on the ground, due to 35 disposals etc. We don't have data available for score involvements. Geelong scored 34 times. How many of those scores do you think Ablett was involved in? And he kicked 2 goals; can you recall how he got them? I ask because I happen to know the answer to those questions.
 


People paid to report on the game are no more than casual observers. They wouldn't be watching replays of every passage of play over and over to understand everything every player is doing, the way coaches or Champion Data staff do.

Plenty of things a casual observer can understand by watching a game once, but there are also loads of things all of us won't understand or will misremember when we watch once. Player Ratings are neither correct nor incorrect because they are a statistical system. They are valuable because they are about the only ones watching properly who publish useful guidance for footy followers.

The hundreds of examples you think clearly demonstrate the Player Ratings are wrong do nothing of the sort. It is no more credible to say hundreds of people watching once and forming a view prove the Player Ratings wrong than it is to say a dedicated team of people, watching and recording detail others don't even notice and taking the time to form ratings around what they can see are the most telling actions, proves all the casual observers wrong.

Surely you are aware of the Moneyball story, where "expert" professionals who watched and judged baseballers for a living were shown to be about fourth-rate by people who studied data carefully and worked out what actually makes teams win games.

There is not one shadow of a doubt that the Player Ratings system is fallible. It is just that it is a lot less fallible than a bunch of know alls like us sitting around watching a game then nodding our heads knowingly about how "elite" one player is or how big a spud another guy is.

When they had Mitchell 20th highest rated player in that Preliminary Final, they would very likely be wrong. He could have been better or worse than that by a little or a lot in reality. But I would back their methods over ours any day over a large number of assessments. I am in some sort of informed position to judge this: I am a former full-time punter of 20 years who has been put out of commission by people who understand data a lot better than I do. And I was objectively excellent in my field; I know that because of the profits I made. So if they are beating objectively excellent judges with data, they are not going to have much trouble outmatching your ordinary run of one-watch analysts.

Tell me this... in the 2008 Grand Final, it seems quite a lot of people think Gary Ablett was the best player on the ground, due to 35 disposals etc. We don't have data available for score involvements. Geelong scored 34 times. How many of those scores do you think Ablett was involved in? And he kicked 2 goals; can you recall how he got them? I ask because I happen to know the answer to those questions.
Hoyney got you goood!
You are deep in that mathematician's hole.
 
Hoyney got you goood!
You are deep in that mathematician's hole.

As a punter, I was an old-fashioned form student, who went mainly by eye, mostly by results, a lot by precedents, and supported that with some fairly well targeted data analysis. As I have written, I don't think CD have a mortgage on being right. There will be holes and limitations all over their systems. It is just that all other known systems will have deeper holes and greater limitations. Most of us don't understand just how much we misunderstand things.

But when working out whose view to trust most, start with those who watch the closest.
 
People paid to report on the game are no more than casual observers. They wouldn't be watching replays of every passage of play over and over to understand everything every player is doing, the way coaches or Champion Data staff do.

Plenty of things a casual observer can understand by watching a game once, but there are also loads of things all of us won't understand or will misremember when we watch once. Player Ratings are neither correct nor incorrect because they are a statistical system. They are valuable because they are about the only ones watching properly who publish useful guidance for footy followers.

The hundreds of examples you think clearly demonstrate the Player Ratings are wrong do nothing of the sort. It is no more credible to say hundreds of people watching once and forming a view prove the Player Ratings wrong than it is to say a dedicated team of people, watching and recording detail others don't even notice and taking the time to form ratings around what they can see are the most telling actions, proves all the casual observers wrong.

Surely you are aware of the Moneyball story, where "expert" professionals who watched and judged baseballers for a living were shown to be about fourth-rate by people who studied data carefully and worked out what actually makes teams win games.

There is not one shadow of a doubt that the Player Ratings system is fallible. It is just that it is a lot less fallible than a bunch of know alls like us sitting around watching a game then nodding our heads knowingly about how "elite" one player is or how big a spud another guy is.

When they had Mitchell 20th highest rated player in that Preliminary Final, they would very likely be wrong. He could have been better or worse than that by a little or a lot in reality. But I would back their methods over ours any day over a large number of assessments. I am in some sort of informed position to judge this: I am a former full-time punter of 20 years who has been put out of commission by people who understand data a lot better than I do. And I was objectively excellent in my field; I know that because of the profits I made. So if they are beating objectively excellent judges with data, they are not going to have much trouble outmatching your ordinary run of one-watch analysts.

Tell me this... in the 2008 Grand Final, it seems quite a lot of people think Gary Ablett was the best player on the ground, due to 35 disposals etc. We don't have data available for score involvements. Geelong scored 34 times. How many of those scores do you think Ablett was involved in? And he kicked 2 goals; can you recall how he got them? I ask because I happen to know the answer to those questions.

Mate, Trey Ruscoe got 1 game last season. He was poor and immediately dropped. He was not given another game. He was delisted at the end of the year. No club, despite an expanded competition, showed any interest in picking him up.

The player ratings would have us believe that that 1 game was right up amongst the top 20-30 players in the competition. It would have us believe that his performance was better than the average performance of the Brownlow medallist or the Coleman medallist or the reigning 8-time AA ruck.

Just 'trust the system' regardless of all evidence to the contrary feels a bit cult-like, to be honest. If all available evidence doesn't support the proposition, then it is the proposition that should be challenged, rather than trusting that the system is right and knows better than basically everyone and everything else (including all the other rating systems).
 
That is a lot of text to say not a lot.

Bryan is very, very, very average and only got a game due to a lack of options. It's no wonder you won't go through any of his stats because they are terrible and it is inarguable.

Whitfield was All Australian this season. He was in plenty of the suggested sides on here and there was no uproar whatsoever when he made the 22. Even if the argument is that the weightings are bad for rucks, it has Whitfield not in the top 200 players in a season where he was All Australian. That tells you the system is broken.

Even when you highlighted he was the 30-somethingth rated med defender, that's already an obvious problem. You then list the other quality med defenders and ignore all the average ones the 'system' has ahead of him.

Last season, the ratings had 3 Bulldogs players in the top 6 in the entire comp. It had Trey Ruscoe as better than Max Gawn. It had Tarryn Thomas (and Trey Ruscoe) as better than Dusty Martin. It had Matt Flynn as better than Patrick Cripps.

I understand it is a mathematical weighting system but that doesn't make it good, accurate or otherwise. I can make a system that values kick ins twice as much as goals but that doesn't make it right.

I normally find you a very good and fair poster but you are focussing on all the wrong things here imo.

You are using AA selection, which is no more than the casual judgement of 9 very fallible people, not even the 9 best footy experts that can be found, to discredit the Player Ratings. As if the AA panel is anything other than just 9 people, who DO NOT even watch all the games, giving their casual view. So Whitfield not being rated in the top 30 medium-defender performers this season doesn't tell me the Ratings system is broken so much as it tells me the AA system is laughable. You are quoting AA selectors as if their findings are based on forensic analysis. They are not. It doesn't have anything near that type of authority.

Imagine you were up for murder and you knew you were innocent and you were to have your trial decided the way AA selections are decided. You would be dead set shitting yourself. I know I would be.

You are also falling into the fool's trap of using the 1 game Ruscoe happened to play that produced an unusually high rating as evidence the system is broken. It would be akin to asking why Brad Hodge was dropped after he made a Test 200. His 200 in that match proves the cricket scoring system is wrong, because how can it not be if he averages 4x as much in that innings as Mike Hussey does over his career? Everyone knows Mike Hussey is better, right?

I know you are not that silly.

If you know Max Gawn and Dustin Martin are better than Ruscoe, then join a queue of 7.5 billion people, including CD. Ruscoe's career player rating was 6.13, which is really low. Gawn's is 14.6, which over a long career is really high. Dusty's is 14.4, which is also really high. Would you argue, for example, that a player getting coaches votes in one game out of 50 proves the system of coaches votes is wrong, because we can see from his other 49 games that he is not good enough to get coaches votes?

The fact you are focussing on outliers rather than on the overall picture means you are misunderstanding or misrepresenting what the player ratings are.

I will post below the top 20 Hawks by average rating in 2024. I presume you watch all the Hawks games. Obviously we overlook the guys who only played 1 and 4 games, as those samples are subject to massive variance. We know the system seems to favour rucks, fairly or otherwise, compared to received judgement, so look past Meek as well. That leaves the other 17. Are they in some sort of reasonable order of merit for the 2024 season, or does the list just look totally random to you?

[Attachment: table of the top 20 Hawks by 2024 average player rating]
 
Mate, Trey Ruscoe got 1 game last season. He was poor and immediately dropped. He was not given another game. He was delisted at the end of the year. No club, despite an expanded competition, showed any interest in picking him up.

The player ratings would have us believe that that 1 game was right up amongst the top 20-30 players in the competition. It would have us believe that his performance was better than the average performance of the Brownlow medallist, or the Coleman medallist, or the reigning 8-time All-Australian ruck.

Just 'trust the system' regardless of all evidence to the contrary feels a bit cult-like, to be honest. If all the available evidence doesn't support the proposition, then it is the proposition that should be challenged, rather than trusting that the system is right and knows better than basically everyone and everything else (including all the other rating systems).

Trey Ruscoe, lol. His career player rating says delist. He got delisted. You are focussing on one single game he played, and we have no idea what factors were at play affecting that rating.
 

Sigh.

Whitfield had an elite season. Everyone is aware of that - not just 9 fallible selectors. It wasn't an outlier selection. Just like we 'know' Dusty is better than Trey Ruscoe, we know Whitfield had a better 2024 than about 100 players ranked ahead of him (the more ridiculous ones I've highlighted). You also mentioned it was one of his worst seasons (according to the ratings) but nearly everyone considers it one of his best.

It's not just 1-game outliers either. Jesse Hogan had one of the great seasons of recent times as a key forward and was ultra damaging. The player ratings didn't have him in the top 75 players in the comp. Did the fallible selectors stuff that up too? He's miles behind a number of pretty average players; Ned Moyle, as just one example, is well ahead of him.

Charlie Cameron has 58 goals + assists this season. That puts him top 6 in the entire comp (yes, some of those ahead played a couple more games). The player ratings have him at 330+. You should read some of the names ahead of him (some of whom their own supporters never want to see again).

They had Tarryn Thomas' average game as top 25 in the comp last year. It's not a small sample size outlier either - he played half the year (12 games). They've got him ahead of Charlie Curnow, Dusty Martin, the Brownlow medallist, etc too.

I genuinely could reel off 50 more examples.

They're just not reliable enough as an indicator - certainly not to the extent that its findings should trump the views of literally everyone and everything else.

Even take your examples from the Prelim final that started this conversation: your arguments for why he was not damaging, why I was 'imagining' him being BOG, and why he was actually ranked 20th on the ground...

(We'll leave aside the fact that everyone considered him best on ground, and that the Freo coach tried about 7 different players to quell his influence and lamented the loss of Crowley as a dedicated tagger in the post-match presser.)

1. Not enough contested ball - when he had the most contested ball of any player on the winning team
2. Poor efficiency - when he had mostly kicks and the most effective disposals of any player on the field
3. Low score involvements - when he had the equal most score involvements of any midfielder (or non-forward) on the ground

None of those stats point to 20th on the ground (especially with all the other rating systems having him top 3)
 
The thing is that the people you are railing against, MR, as 'casual observers' who just form an opinion do so not just by casually observing; they do it with all sorts of their own data at their disposal as well.

This is true, but a lot of people still focus on the most familiar data, the data that represents the most visible and easily observed actions like disposals and goals. As football followers we tend to be very lazy and haphazard in how we arrive at our judgements of players. Someone (Champion Data) comes along and does it very methodically, at least making a genuine effort to follow scientific principles, and hands a fair chunk of it to people freely, and we say they are wrong because their findings don't agree with our and others' haphazardly compiled ones.

It is laughable when you think about it.
 

Well no it isn’t.

Champion Data can't, as far as I'm aware, differentiate between a contested mark as the classification defines it, a contested mark like the one Heeney takes against GWS when he's putting opponents on his back and trying to cart them over the line, and a mark like Riewoldt's famous one with the flight of the ball. It can't put a value on composure and so many of the intangibles that an observer can notice when they watch a game.

The data that is USED to reach a player rating could, I don't doubt, be of value. But the conclusions it reaches don't, to me, add any more than what is reached by other means.
 

You are coming at it from the angle of wanting to find flaws. Most of the things you are describing as flaws are not verifiable by even a moderately reliable method. By that I mean references to "most people", AA selectors, etc. These are just arrived at through haphazard casual judgement. And it has happened plenty of times in human history that large groups of people who we might have thought should know best have been shown to be wrong by someone who sees things differently. That may or may not be the case here.

I will give you this. From what you have presented, what else I can glean, and by reference to Mitchell's player ratings in surrounding matches, it is likely he played better than his rating in this match indicated, or at least any knowledgeable observer would have thought so.

There are things, I am sure, that Player Ratings take into account that we just can't get stats for. For example, how many contests a player loses could reasonably be included. We could think of more, I am sure. But you are correct: based on the available stats and witness testimony, he doesn't look anywhere near as low as 20th BOG. It would be intriguing to hear an insider actually explain how they came to rate his game so moderately. I am confident they would have a coherent reason (not necessarily one we would readily accept as correct).

But forget driving each other mad with individual game ratings that don't appear to make sense. Have a look at the 2024 list of Hawks season average player ratings. Is it a reasonably accurate reflection of the order of merit, in your opinion? I am presuming you watched all the games; I didn't.
 