
Paige Pierce To Discraft 2020

I think you need to brush up on your maths on this one. What sort of maths do you believe calculates what the SSA or 1000 rated score is? Do you think it's calculated from thin air?
...


Cgkdisc is working for the PDGA as the official who rates rounds. He is the most qualified person in the world when it comes to rating a round, and he knows perfectly well what maths determines what a 1000 rated round is, because it is in fact *his* analysis that determines the ratings.
 
Let's also keep in mind that FPO is typically rated separately from MPO now, which directly helps all of the FPO players' ratings.
 
Cgkdisc is working for the PDGA as the official who rates rounds. He is the most qualified person in the world when it comes to rating a round, and he knows perfectly well what maths determines what a 1000 rated round is, because it is in fact *his* analysis that determines the ratings.

A fascinating lack of understanding for someone in such a position then.
 
Cgkdisc is working for the PDGA as the official who rates rounds. He is the most qualified person in the world when it comes to rating a round, and he knows perfectly well what maths determines what a 1000 rated round is, because it is in fact *his* analysis that determines the ratings.

Yah, I get that, but it's pretty clear... Two players play the same course; one guy is 700 rated, the other is 1000 rated. Let's say the 700 guy plays the same course but with one hole 10 ft shorter, so then the two players (divisions) can be rated separately. They shoot the same score. Guess who will have the higher rated round?
 
Yes, but there were only 4 players with a 900+ rating, and they played the same layout as MPO. Feels like that will make a high rating harder to get than at the tournaments the US women play.

uh. nope. Not how ratings work.

You're also forgetting that your rating is a conglomerate formula based upon a full year's worth of events. You can "say" she never beats those people, but within the last year Sarah's finished first at the Swine in Thailand (beat Paige by 8), Nantucket (beat Cat by 6), GMC (beat Kristin by 5, Rebecca by 6, Paige by 10, and Cat by 20), and the Hall of Fame (beat Paige by 4, Cat by 9).

Pudding.

And as far as who you "see" as better: well, she's rated 4 points above Henna and Eveliina, who are probably 8-10 years younger. Four-tenths of a stroke per round would allow you to use your own eye test to say who's better. But it is still awfully close regardless.

Sometimes these types of comments stem from the fact that a lot of guys don't want to see these women as being just as good as, if not better than, the high-level MA1 player.

One more point to add...

Generally one throw is worth about 10 rating points on average. So each of these ladies is within a throw or two per round of the others.
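If it helps to see the arithmetic (using that rough ~10 points per throw rule of thumb, not any official formula):

```python
# Rough sanity check, assuming ~10 rating points per throw per round (rule of thumb above).
POINTS_PER_THROW = 10

rating_gap = 4  # e.g., rated 4 points above Henna and Eveliina
throws_per_round = rating_gap / POINTS_PER_THROW
print(f"A {rating_gap}-point rating gap is about {throws_per_round:.1f} throws per round")
# -> A 4-point rating gap is about 0.4 throws per round
```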



uh, I said that. basically.

I think you need to brush up on your maths on this one. What sort of maths do you believe calculates what the SSA or 1000 rated score is? Do you think it's calculated from thin air? It's calculated based on the scores achieved by players of certain ratings. The only way this could be done with any sort of rigor is by a weighted average or a similar metric (one that reacts in a similar way). If the men and women play from different tees, the ratings should be calculated based only on the competitors who played the same course (they may not be, if it's done poorly or if statistical efficacy is weak due to small numbers of competitors).

From Paige's point of view, she wants her competitors to have increased ratings while she continues to beat them as she does currently. The rating system is not internally consistent, since some players play some tournaments but not others, and no player plays at a perfectly consistent level. If it turns out that her opponents play better when she's not there and worse when she is, then that will boost her rating.

A fascinating lack of understanding for someone in such a position then.

Jug:

You are soooo missing it that it's ridiculous. Chuck (Cgkdisc) invented the ratings system. He developed it. Back in 1998. He's not some "bureaucrat" who got hired for a job. The PDGA bought the ratings system from him. He knows how it works, even the proprietary parts of the formula that we don't. What you said is like saying you're gonna explain E = mc² to Einstein.
 
Yah, I get that, but it's pretty clear... Two players play the same course; one guy is 700 rated, the other is 1000 rated. Let's say the 700 guy plays the same course but with one hole 10 ft shorter, so then the two players (divisions) can be rated separately. They shoot the same score. Guess who will have the higher rated round?

Nice try with the "iteration thing"; that's my buddy Brian's favorite thing to use when arguing ratings.

They played the "same course" but one hole is "10 feet shorter"??? So they didn't really play the same course, did they? The answer to your guesstion is "we don't know". Depending on a lot of other factors, EITHER player could have the higher rating.
 
Nice try with the "iteration thing"; that's my buddy Brian's favorite thing to use when arguing ratings.

They played the "same course" but one hole is "10 feet shorter"??? So they didn't really play the same course, did they? The answer to your guesstion is "we don't know". Depending on a lot of other factors, EITHER player could have the higher rating.

Someone once told me (well...ok more than once) to play the pro divisions because you will get higher rated rounds. He also said... "ratings in / ratings out". You don't agree with that guy?

But if the only difference was that one hole was 10 ft shorter, which causes them to be rated separately, are you saying the 1000 rated player wouldn't have the higher rated round?
 
What sort of maths do you believe calculates what the SSA or 1000 rated score is? Do you think it's calculated from thin air?
I think this hits the nail on the head. Chuck, maybe you should send an email to the guy who created the ratings system and get all this cleared up. Please CC all of us on the conversation.
 
Looks like Paige and Sarah had quite big jumps in rating. Has playing so few events made it easier to make a big jump up in rating?

Sorta yea. What it really does is give more weight to the more recent rounds. If they've been playing hot for the last 3 or 4 tournaments, then their rating will have a larger jump.

For the last several years, Paige's rating has been based off of ~80 rounds. With the wintertime and COVID pauses in action, her rating this update is based on only 50 rounds. It also happens that her last few tournaments have been among the best in her career. So there are fewer <950 rounds to counteract the recent >1000 rounds.

Similar story for Hokom. Typically she has 80-90 rounds used to calculate her rating, and this time she has 41. Her recent run (specifically HoF and Samui) of 1000 rated rounds got some extra weight when 3 months of 2019 tournaments fell out of the calculation. If she keeps up the pace of her last 2 tournaments, however, her rating will fall back down once she gets more rounds included.
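For anyone curious how the smaller window amplifies a hot streak, here's a minimal sketch. It assumes the commonly cited rule that the most recent ~25% of included rounds count double; the real PDGA weighting details are more involved, so treat this as illustrative only.

```python
# Minimal sketch of a recency-weighted rating average. Assumes the commonly
# cited rule that the most recent ~25% of included rounds count double; the
# actual PDGA weighting details are more involved.

def player_rating(round_ratings):
    """round_ratings: list of round ratings, oldest first."""
    n = len(round_ratings)
    recent_cutoff = n - max(1, n // 4)          # most recent ~25% of rounds
    weights = [2 if i >= recent_cutoff else 1 for i in range(n)]
    total = sum(w * r for w, r in zip(weights, round_ratings))
    return round(total / sum(weights))

# Same hot stretch of 20 rounds at 1005, but a shorter window moves the average more.
full_window  = [955] * 60 + [1005] * 20   # ~80-round window
short_window = [955] * 30 + [1005] * 20   # ~50-round window
print(player_rating(full_window), player_rating(short_window))   # 975 981
```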
 
Disagree. As long as her competitors keep improving their ratings, and Paige keeps on beating them, her rating will get to 1000+. If her competitors' ratings stagnate, that would make it much harder for PP to get to 1000 rated. It benefits PP immensely on her quest to 1000 for players like Hokom to be 975+ rated. She needs several FPO players to bubble up around 950-980.

Without rehashing the last page of arguments...

It's not so much about the ratings of your competitors as it is about how much you beat them by. The entire FPO field could be at 900 and Paige could hit 1000 if she beats them all by 10-11 strokes per round. Or the FPO field could be at 950 and Paige would only have to beat them by 5-6 strokes per round.

Now, if the FPO field were at 900, it would be to Paige's advantage NOT to play on 4000' pitch-and-putt courses. If the median player shoots a -10, then it's gonna be really hard for Paige to win by 10 strokes a round.
But she's a pro, and she plays on pro courses, so she doesn't have to worry about that. If her competition shoots +5 and she shoots -5, then she'll get her good rating.
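The back-of-the-envelope version of that argument, again leaning on the rough ~10 rating points per stroke rule of thumb rather than the actual formula:

```python
# Back-of-the-envelope version of the argument above, assuming roughly
# 10 rating points per stroke per round. Not the actual PDGA formula.
POINTS_PER_STROKE = 10

def implied_rating(field_rating, strokes_beaten_by):
    return field_rating + strokes_beaten_by * POINTS_PER_STROKE

print(implied_rating(900, 10))  # 1000: beat a 900-rated field by 10 strokes per round
print(implied_rating(950, 5))   # 1000: beat a 950-rated field by 5 strokes per round
```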
 
But she's always in contention. The way I understand ratings, you could be the #1 in the world and never win an event. Just finish top 3 every time. Don't go 1st, 2nd, 47th, 88th, 108th, 1st, 1st. Consistent good finishes will get you there.

Please, someone, correct me if I'm mistaken.

Theoretically possible, mathematically barely possible, realistically improbable.
(all this is for MPO, sorry)
If you finished in 2nd place by 1 stroke at every Elite Series event, you'd be an incredibly good player. One of the best players in the world, for sure. Probably the best, as long as the same guy didn't consistently finish in 1st.

Unfortunately, in reality you would indeed be losing to the same guy repeatedly.

Here are the winners of all Elite Series + Major events for 2019 (MPO):
[image: table of 2019 Elite Series and Major winners, MPO]

For the year, the average winning rating across those tournaments was 1061. If you were to lose by one stroke per tournament (that is, an average of 2-3 rating points per tournament, depending on the number of rounds), you'd have a rating of 1058. Almost good enough to beat Paul... If Paul always wins and never plays poorly, it's gonna be hard to get a higher rating by finishing in 2nd place all the time.
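For what it's worth, the arithmetic behind those 2-3 points per tournament (again just the ~10 points per throw rule of thumb, not the real formula):

```python
# Losing each tournament by a single stroke, spread over 3-4 rounds, costs
# only ~2-3 rating points per round on average (rough rule of thumb only).
POINTS_PER_STROKE = 10

winning_rating = 1061
for rounds_per_event in (3, 4):
    penalty = POINTS_PER_STROKE / rounds_per_event   # one stroke spread over the event
    print(rounds_per_event, round(winning_rating - penalty))   # roughly 1058 either way
```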
 
Theoretically possible, mathematically barely possible, realistically improbable.
(all this is for MPO, sorry)
If you finished in 2nd place by 1 stroke at every Elite Series event, you'd be an incredibly good player. One of the best players in the world, for sure. Probably the best, as long as the same guy didn't consistently finish in 1st.

Unfortunately, in reality you would indeed be losing to the same guy repeatedly.

Here are the winners of all Elite Series + Major events for 2019 (MPO):
[image: table of 2019 Elite Series and Major winners, MPO]

For the year, the average winning rating across those tournaments was 1061. If you were to lose by one stroke per tournament (that is, an average of 2-3 rating points per tournament, depending on the number of rounds), you'd have a rating of 1058. Almost good enough to beat Paul... If Paul always wins and never plays poorly, it's gonna be hard to get a higher rating by finishing in 2nd place all the time.

Thanks for that breakdown! Awesome information.
 
Theoretically possible, mathematically barely possible, realistically improbable.
(all this is for MPO, sorry)
If you finished in 2nd place by 1 stroke at every Elite Series event, you'd be an incredibly good player. One of the best players in the world, for sure. Probably the best, as long as the same guy didn't consistently finish in 1st.

Unfortunately, in reality you would indeed be losing to the same guy repeatedly.

Here are the winners of all Elite Series + Major events for 2019 (MPO):
For the year, the average winning rating across those tournaments was 1061. If you were to lose by one stroke per tournament (that is, an average of 2-3 rating points per tournament, depending on the number of rounds), you'd have a rating of 1058. Almost good enough to beat Paul... If Paul always wins and never plays poorly, it's gonna be hard to get a higher rating by finishing in 2nd place all the time.

FPO included this time:
[image: table of 2019 Elite Series and Major winners, FPO included]

Almost the exact same story as MPO. The average winning rating across those tournaments was 989. Losing each tournament by one stroke would give you a rating of ~987. (And before the 2020 ratings update, this would have been good enough to be the highest rated woman in the world.)
 
I'm sorry to impugn your understanding of your own rating system, Chuck. I regret suggesting you don't understand your own maths well enough; however, you are incorrect in disagreeing with me in the following ways:

"ELO rates players based on their results directly against each other."

My claim wasn't that the rating system looks at a direct head-to-head comparison, but that it follows the same principles, i.e. that the change in your rating given a result depends on the difference between your rating and your opponents'. So the propagators' scores and ratings determine the 1000 rated round (it may not be a weighted-average calculation, but it is more or less related, depending on how picky you want to be), and your score in comparison to that 1000 rated round gives your rating. If you are also a propagator, then the difference between your rating and the other propagators' ratings feeds into your round rating, which in turn impacts your rating. It's the same principle; it's just broken down into more steps, because the competition isn't head to head, which lets you incorporate more data. Perhaps you disagree that that is a principle of ELO ratings. I'm perhaps considering it at a more summary level rather than the level of detail that you're used to.
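To make the dependency I'm describing concrete, here's a toy sketch. This is emphatically not the proprietary PDGA formula; it just assumes each propagator's rating implies an expected score, backs an SSA out of how the propagators actually shot, and rates every round against that SSA at roughly 10 points per throw.

```python
# Toy model of the dependency being argued here, NOT the proprietary PDGA formula.
POINTS_PER_THROW = 10

def estimate_ssa(propagator_ratings, propagator_scores):
    """Back out the score a 1000-rated player 'should' shoot from the propagators."""
    implied = [score + (rating - 1000) / POINTS_PER_THROW
               for rating, score in zip(propagator_ratings, propagator_scores)]
    return sum(implied) / len(implied)

def round_rating(score, ssa):
    return 1000 + (ssa - score) * POINTS_PER_THROW

ratings = [1030, 1000, 970, 950]
scores  = [55, 57, 61, 62]
ssa = estimate_ssa(ratings, scores)
print(round(ssa, 1), round(round_rating(54, ssa)))        # 57.5 1035

# Bump every propagator's rating by 10 points, keep the same scores, and the
# SSA (and hence every round rating) shifts too: the dependency claimed above.
ssa_bumped = estimate_ssa([r + 10 for r in ratings], scores)
print(round(ssa_bumped, 1), round(round_rating(54, ssa_bumped)))   # 58.5 1045
```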

"DG ratings are based on how well players play the course which is rated that round based on how well propagators, men or women, play the course. It does not matter what the mix of ratings of propagators or their average. The math accounts for that to determine the SSA or 1000 rated score."

It does matter, in that the average is just a function of their scores and ratings: if all their ratings were 100 points higher but everyone still shot the same scores, then your round rating would be higher. Your statement suggests that the 1000 rated round is independent of the propagators' current ratings, but it must depend on them (or at least be a parallel calculation impacted by the same inputs). If it's not, then your weather model may be worth a lot more than what the PDGA paid you.

"It makes sense that beating higher rated players would boost your rating. But you don't need to beat higher rated players to improve your rating. You need to shoot better scores."

But given the same scores and higher rated players, your rating will improve. So my suggestion that Paige would prefer her opponents' ratings to improve holds. You, of course, believe that the rating system would ideally only reflect improvements in players' skill, so you'd then expect the scores (and then the round ratings, etc.) to reflect that change, but that's not necessarily the case, especially given the importance of the mental game and the differential impact of playing in bigger tournaments on a 5x world champion versus the rest of the field. A rating system can only be as accurate as the consistency of the players who make up the field (well, it can be far less accurate too).

You've got me interested now; I will go and listen to the Nick & Matt show.
 
It does matter, in that the average is just a function of their scores and ratings: if all their ratings were 100 points higher but everyone still shot the same scores, then your round rating would be higher. Your statement suggests that the 1000 rated round is independent of the propagators' current ratings, but it must depend on them (or at least be a parallel calculation impacted by the same inputs). If it's not, then your weather model may be worth a lot more than what the PDGA paid you.
This seems to be an extreme statement. If all of their ratings were 100 points higher, they wouldn't score the same. They wouldn't be the same players. If you bring in a group of 900 rated players and ask them to go play a competition... and then you bring in a group of 1000 rated players and ask the same: they won't shoot the same under the same conditions. As stated in Goodhart's Law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." So yes, you could purposely manipulate the system to make your point, but that's beside the point of what happens under actual tournament conditions.
 
This seems to be an extreme statement. If all of their ratings were 100 points higher, they wouldn't score the same. They wouldn't be the same players. If you bring in a group of 900 rated players and ask them to go play a competition... and then you bring in a group of 1000 rated players and ask the same: they won't shoot the same under the same conditions. As stated in Goodhart's Law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." So yes, you could purposely manipulate the system to make your point, but that's beside the point of what happens under actual tournament conditions.

It's an illustrative hypothetical; the difference could be 5 points or 1 point. If the ratings of the propagators were different but the scores the same, then the 1000 rated round would be a different score, and therefore the rating earned for a given score by the individual player we're considering would be different. I'm illustrating the dependency, which disproves the statement "It does not matter what the mix of ratings of propagators or their average." Now, you might fairly say I'm taking it out of context at this point, and that Chuck was merely stating that no direct calculation of the average, or fitting of a distribution to the mix of ratings, occurs in the calculation, but that wasn't how it read to me at first.

In terms of actual tournament conditions, unless some parametric model is being fitted to each individual propagator against aspects of the courses being played, the rating is only a single point estimate of a player's skill level. It does not account for the differential effects of high-pressure events, altitude, weather, distance from home, wooded vs. open courses, elevation changes, basket types, or course distance on a player's score. It's perfectly possible to imagine a situation where each FPO player goes back to her local/preferred tournaments, plays against a field with little crossover with the FPO field, and gains or loses rating points due to conditions that don't affect the local or regular players who may be the only candidate propagators. The FPO player's skill level may or may not change in that time, but she comes back to the national tour, plays against Paige, and that shift in her rating impacts what Paige's rating is. Most of the time you'd expect that to be noise that basically washes out of any rating system, but if the FPO field deepens there might be an effect of buoying up the top players. Or, if individuals somehow learn to play better at these smaller or local events, that could help Paige's rating, particularly if on the courses they play together Paige ends up playing courses that favor her driving ability.

Just as an aside, regarding your statement that this is too extreme and couldn't happen in reality: I would suggest it's perfectly possible if ratings between fairly distinct communities of DGers diverge sufficiently. If, for example, player skill levels in Estonia improve dramatically over time, then their 900 rated players may well be as good as 1000 rated players in the US. Similar effects are possible between regions within the US, though given the cross-pollination of players via the DGPT, the NT, and the like, the effect is likely to be far, far smaller (and it assumes a change in play styles or teaching which might not be occurring in practice). It's also possible that Chuck has additional reweighting factors to help prevent ratings drift over time and/or additional cross-checks for specific players/courses etc. to bring things into balance. This sort of ratings drift is very common in rating systems that are used internationally, though, especially for smaller sports and games.
 
Just as an aside, regarding your statement that this is too extreme and couldn't happen in reality: I would suggest it's perfectly possible if ratings between fairly distinct communities of DGers diverge sufficiently. If, for example, player skill levels in Estonia improve dramatically over time, then their 900 rated players may well be as good as 1000 rated players in the US. Similar effects are possible between regions within the US, though given the cross-pollination of players via the DGPT, the NT, and the like, the effect is likely to be far, far smaller (and it assumes a change in play styles or teaching which might not be occurring in practice). It's also possible that Chuck has additional reweighting factors to help prevent ratings drift over time and/or additional cross-checks for specific players/courses etc. to bring things into balance. This sort of ratings drift is very common in rating systems that are used internationally, though, especially for smaller sports and games.
Don't disagree with any of the rest of it - but how big has the Estonian drift been? I remember there was an article about it. But I suppose my only response wouldn't be contrary to your points, but would be more about mitigating behavior: as the number of players in Estonia increases, their intermingling with the general population of players is likely to increase as they move about to test their skill, and they'll bring back infusions that bring them back in line with the rest of the world. Could they remain isolated like that?

I suppose I missed some of the conversation before - but the posts I saw seemed to indicate that ELO and the Rating system were equivalent in broader terms. With regard to the specific elements you're talking about I guess I've got no real argument. There are pockets that fluctuate, but do they fluctuate enough that they cause players to have a significantly different rating from others?

For example, if we compared Albert the Bazooka Dude's expected performance due to rating coming over to the US to his actual performance - is there a significant difference? What about Kajiyama? If the fluctuations do not cause a significant difference in outcome, then is this a concern?
 
Don't disagree with any of the rest of it - but how big has the Estonian drift been? I remember there was an article about it. But I suppose my only response wouldn't be contrary to your points, but would be more about mitigating behavior: as the number of players in Estonia increases, their intermingling with the general population of players is likely to increase as they move about to test their skill, and they'll bring back infusions that bring them back in line with the rest of the world. Could they remain isolated like that?

I suppose I missed some of the conversation before - but the posts I saw seemed to indicate that ELO and the Rating system were equivalent in broader terms. With regard to the specific elements you're talking about I guess I've got no real argument. There are pockets that fluctuate, but do they fluctuate enough that they cause players to have a significantly different rating from others?

For example, if we compared Albert the Bazooka Dude's expected performance due to rating coming over to the US to his actual performance - is there a significant difference? What about Kajiyama? If the fluctuations do not cause a significant difference in outcome, then is this a concern?

The drift would occur as a whole population of new players increases in skill together. The rating system has no way of knowing that a whole swathe of players has improved from one tournament to the next unless the propagators are of a fairly constant skill level. If everyone in the tournament increases in skill as one, then that shift will disappear from their ratings and only reappear when they are tested against other communities. The larger that community is, the longer it will take for the smoothing-out to occur through cross-pollination. It depends a little on the function used to calculate the 1000-rated round from the propagators, and on the process by which ideal propagators are selected. These details are, I suspect, Chuck's proprietary calculations, and without them the only way of knowing how biases can occur, and whether they're dealt with, would be to look at the data.

To create a significant drift you'd need little cross-pollination and an immature set of players, i.e. new players learning quickly.
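Here's a tiny toy example of that mechanism (same rough 10-points-per-throw model as my earlier sketch, definitely not the real formula): if every propagator in an isolated pool improves by the same amount, the round ratings computed within that pool don't move at all.

```python
# Toy illustration of the drift mechanism: a closed pool of propagators all
# improve by 3 strokes, yet the ratings computed within the pool are unchanged.
POINTS_PER_THROW = 10

def estimate_ssa(ratings, scores):
    implied = [s + (r - 1000) / POINTS_PER_THROW for r, s in zip(ratings, scores)]
    return sum(implied) / len(implied)

def round_rating(score, ssa):
    return 1000 + (ssa - score) * POINTS_PER_THROW

pool_ratings  = [920, 900, 880]
scores_before = [60, 62, 64]
scores_after  = [57, 59, 61]    # everyone improves by exactly 3 strokes

ssa_before = estimate_ssa(pool_ratings, scores_before)
ssa_after  = estimate_ssa(pool_ratings, scores_after)
print([round(round_rating(s, ssa_before)) for s in scores_before])   # [920, 900, 880]
print([round(round_rating(s, ssa_after))  for s in scores_after])    # [920, 900, 880]
# The pool's collective improvement is invisible until these players mix with
# propagators from elsewhere.
```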

I have no idea how big the Estonian drift is, and we would only get a good indication from an event where large cross-sections of each community play in the same tournament. An individual player playing in another tournament would, I suspect, just be seen as playing a particularly good/bad round. I suppose there could be special weightings that either amplify or dampen this effect, and then also back-calculate what the drift may have been in the ratings of other players who generated, or are dependent on, that player's rating; I doubt that, though, since it's another level of iteration.

We can't really know whether the drift can become large or how quickly it can be smoothed out; that's somewhat dependent on the details of Chuck's system and how it accounts for outliers or temporal drift.

Any mature community of players shouldn't drift significantly over the short term since there should be a body of players that provide a kind of ballast to the ratings.

Is it a concern? It depends on what people value in a rating system. If ratings deflate over time it could be quite disconcerting or cause people to lose confidence in the system. If in a decade no-one was reaching 1000 ratings it could cause image problems. The same if the opposite occurs, if beginners start shooting 1000 rated rounds. Again though this kind of drift may be impossible depending on how Chuck is calculating the 1000 rated rounds. If players become far better at the game than they are currently or have been and the ratings remain around the same level, will we be annoyed by that apparent inconsistency?

If players in Estonia keep finding that they're significantly under-rated it may be that they'll also be unimpressed by the system and opt for running their own rating system since it won't then make them out to be 'inferior' to the US players.

Whether or not this becomes an apparent issue would depend on how consistently players play to their ratings. Pros seem able to fluctuate over at least a 100-point range during a given tournament. In that light, how easy is it for an individual to identify any kind of inconsistency in the system unless it's really large? Possibly impractically large.
 
Someone once told me (well...ok more than once) to play the pro divisions because you will get higher rated rounds. He also said... "ratings in / ratings out". You don't agree with that guy?

But if the only difference was that one hole was 10 ft shorter, which causes them to be rated separately, are you saying the 1000 rated player wouldn't have the higher rated round?

If this guy said "ratings in equals ratings out", then 'this guy' was trying to simplify it for someone. If he had more accurately said "ratings points in = ratings points out", he'd be correct. It's a zero-sum game if all the players are propagators. In either case, he's a smart guy. And playing "pro divisions" at your local events where there are different layouts for pro and am would mean he was basically saying, "there are more ratings points to chase in those divisions."
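To put a number on "ratings points in = ratings points out", here's a toy illustration using a simple averaging model (illustrative only, not Chuck's actual formula): the round ratings handed out sum to exactly the rating points the propagators brought in.

```python
# Toy check of the zero-sum idea: with a simple averaged SSA, the round ratings
# handed out total exactly the ratings the propagators brought in.
POINTS_PER_THROW = 10

def estimate_ssa(ratings, scores):
    implied = [s + (r - 1000) / POINTS_PER_THROW for r, s in zip(ratings, scores)]
    return sum(implied) / len(implied)

ratings = [1000, 960, 940, 900]
scores  = [55, 58, 57, 63]
ssa = estimate_ssa(ratings, scores)
round_ratings = [1000 + (ssa - s) * POINTS_PER_THROW for s in scores]
print(sum(ratings), sum(round_ratings))   # 3800 3800.0 : redistributed, not created
```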

However, even that is an oversimplification and supposition. Isn't it possible at a tournament (talking your local B/C-tier) for the MA1 division to have a higher average rating for all players than the MPO division? If so, then 'this same guy' would be recommending the MA1 division because there are more ratings points there.

So yes, in your original postulate, you stated that the "only difference was the 10-ft shorter hole", but you didn't specify any other parameters. So: a) it wasn't the same layout; b) we don't know if the shorter layout played easier or harder without more info (right? - you've seen holes that play harder in the short pin than the long pin before, I am assuming); and c) it is just not that simple; see the comments below to Mr. Jug, who is telling us he understands the system better than the guy who invented it.

Sorta yea. What it really does is give more weight to the more recent rounds. If they've been playing hot for the last 3 or 4 tournaments, then their rating will have a larger jump.

For the last several years, Paige's rating has been based off of ~80 rounds. With the wintertime and COVID pauses in action, her rating this update is based on only 50 rounds. It also happens that her last few tournaments have been among the best in her career. So there are fewer <950 rounds to counteract the recent >1000 rounds.

Similar story for Hokom. Typically she has 80-90 rounds used to calculate her rating, and this time she has 41. Her recent run (specifically HoF and Samui) of 1000 rated rounds got some extra weight when 3 months of 2019 tournaments fell out of the calculation. If she keeps up the pace of her last 2 tournaments, however, her rating will fall back down once she gets more rounds included.

Very true. More recent rounds are weighted more heavily, as my source, the PDGA website, states.


It's an illustrative hypothetical; the difference could be 5 points or 1 point. If the ratings of the propagators were different but the scores the same, then the 1000 rated round would be a different score, and therefore the rating earned for a given score by the individual player we're considering would be different. I'm illustrating the dependency, which disproves the statement "It does not matter what the mix of ratings of propagators or their average." Now, you might fairly say I'm taking it out of context at this point, and that Chuck was merely stating that no direct calculation of the average, or fitting of a distribution to the mix of ratings, occurs in the calculation, but that wasn't how it read to me at first.

In terms of actual tournament conditions, unless some parametric model is being fitted to each individual propagator against aspects of the courses being played, the rating is only a single point estimate of a player's skill level. It does not account for the differential effects of high-pressure events, altitude, weather, distance from home, wooded vs. open courses, elevation changes, basket types, or course distance on a player's score. It's perfectly possible to imagine a situation where each FPO player goes back to her local/preferred tournaments, plays against a field with little crossover with the FPO field, and gains or loses rating points due to conditions that don't affect the local or regular players who may be the only candidate propagators. The FPO player's skill level may or may not change in that time, but she comes back to the national tour, plays against Paige, and that shift in her rating impacts what Paige's rating is. Most of the time you'd expect that to be noise that basically washes out of any rating system, but if the FPO field deepens there might be an effect of buoying up the top players. Or, if individuals somehow learn to play better at these smaller or local events, that could help Paige's rating, particularly if on the courses they play together Paige ends up playing courses that favor her driving ability.

Just as an aside, regarding your statement that this is too extreme and couldn't happen in reality: I would suggest it's perfectly possible if ratings between fairly distinct communities of DGers diverge sufficiently. If, for example, player skill levels in Estonia improve dramatically over time, then their 900 rated players may well be as good as 1000 rated players in the US. Similar effects are possible between regions within the US, though given the cross-pollination of players via the DGPT, the NT, and the like, the effect is likely to be far, far smaller (and it assumes a change in play styles or teaching which might not be occurring in practice). It's also possible that Chuck has additional reweighting factors to help prevent ratings drift over time and/or additional cross-checks for specific players/courses etc. to bring things into balance. This sort of ratings drift is very common in rating systems that are used internationally, though, especially for smaller sports and games.

The drift would occur as a whole population of new players increases in skill together. The rating system has no way of knowing that a whole swathe of players has improved from one tournament to the next unless the propagators are of a fairly constant skill level. If everyone in the tournament increases in skill as one, then that shift will disappear from their ratings and only reappear when they are tested against other communities. The larger that community is, the longer it will take for the smoothing-out to occur through cross-pollination. It depends a little on the function used to calculate the 1000-rated round from the propagators, and on the process by which ideal propagators are selected. These details are, I suspect, Chuck's proprietary calculations, and without them the only way of knowing how biases can occur, and whether they're dealt with, would be to look at the data.

To create a significant drift you'd need little cross-pollination and an immature set of players, i.e. new players learning quickly.

I have no idea how big the Estonian drift is, and we would only get a good indication from an event where large cross-sections of each community play in the same tournament. An individual player playing in another tournament would, I suspect, just be seen as playing a particularly good/bad round. I suppose there could be special weightings that either amplify or dampen this effect, and then also back-calculate what the drift may have been in the ratings of other players who generated, or are dependent on, that player's rating; I doubt that, though, since it's another level of iteration.

We can't really know whether the drift can become large or how quickly it can be smoothed out; that's somewhat dependent on the details of Chuck's system and how it accounts for outliers or temporal drift.

Any mature community of players shouldn't drift significantly over the short term since there should be a body of players that provide a kind of ballast to the ratings.

Is it a concern? It depends on what people value in a rating system. If ratings deflate over time it could be quite disconcerting or cause people to lose confidence in the system. If in a decade no-one was reaching 1000 ratings it could cause image problems. The same if the opposite occurs, if beginners start shooting 1000 rated rounds. Again though this kind of drift may be impossible depending on how Chuck is calculating the 1000 rated rounds. If players become far better at the game than they are currently or have been and the ratings remain around the same level, will we be annoyed by that apparent inconsistency?

If players in Estonia keep finding that they're significantly under-rated it may be that they'll also be unimpressed by the system and opt for running their own rating system since it won't then make them out to be 'inferior' to the US players.

Whether or not this becomes an apparent issue would depend on how consistently players play to their ratings. Pros seem able to fluctuate over at least a 100-point range during a given tournament. In that light, how easy is it for an individual to identify any kind of inconsistency in the system unless it's really large? Possibly impractically large.

Nope.
Nope. Nope.
Nope, nope, nope.

And no.

So MANY of your hypotheticals, suppositions, assumptions, etc., whatever you want to call these straw men that purport to back up your argument, might work perfectly well in a criterion-referenced scoring system. PDGA ratings are NOT that. Therefore, as I continue to tell my buddy Brian and others, these things you see as "hypotheticals" or "iterations", these things you see as "the ratings keep changing as more groups come in and their scores are entered", etc., are just figments of your imagination. A norm-referenced scoring system is based solely upon all the norms of that specific stem in that time and that place. SOLELY. ONLY. You cannot "tweak" one thing and then draw any comparative conclusion -- because once you "say" that you do, it has completely changed what is being measured.
 
