
Why ratings are stupid to worry about

This doesn't require a thought experiment. The typical propagator will shoot more than 3 throws better than their rating about 1 in 6 rounds, and the same is true in the opposite direction; that means about 2/3 of their rounds fall within +/- 3 throws of their rating. If we only have, say, 5 props for a league round, the odds that all 5 shoot more than 3 throws worse than their rating are 1 in 7776. So there's a pretty good chance the variance in the SSA produced with just 5 props will be less than +/- 3 throws from the "real" value, and of course it narrows even more with many more props. So when you see a variance of, say, 30 rating points between numbers that have a lot of props behind them, it's more likely there's a physical reason behind it (i.e. not the same conditions) than normal statistical variance of the props and the consistent calculation formulas that have been used over 15 years.
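A quick sketch of that 5-prop arithmetic (Python), assuming the 1-in-6 figure above and that the propagators miss independently of each other:

```python
# Sketch of the probability argument above. Assumes each propagator
# independently has a ~1/6 chance of shooting more than 3 throws worse
# than their rating, as stated in the post.
p_single = 1 / 6      # chance one prop shoots >3 throws worse than rating
n_props = 5           # propagators in the league round

p_all_low = p_single ** n_props
print(f"All {n_props} props >3 throws worse: about 1 in {round(1 / p_all_low)}")
# -> All 5 props >3 throws worse: about 1 in 7776
```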

So then you're saying a group (no matter how large) of higher rated propagators MUST have higher score ratings in 5 of 6 rounds because they are rated higher even when given that all physical conditions are as identical as possible?
 
Statistics does not deal in absolutes ("MUST")

Translation: MUST = highly likely

Or as Chuck correctly stated "pretty good chance"
 
So then you're saying a group (no matter how large) of higher rated propagators MUST have higher score ratings in 5 of 6 rounds because they are rated higher even when given that all physical conditions are as identical as possible?
If you have, say, two pools of 5 props who average 6 throws apart based on their current ratings (say 1000 versus 940), the lower rated group will average the same scores or better than the 1000 rated group about once in 7776 x 7776 rounds (~1 in 60 million). Essentially, all five 1000 rated players would need to shoot below 970ish and all five 940 props would need to shoot above 971ish in the same round. Now, you can cherry-pick five props out of 100 who shoot more than 30 points below their rating and 5 props out of 100 who shoot more than 30 points above their rating. But to pick five high and five low in advance and expect that result is about equal to winning the lottery.
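The same sort of sketch for the two-pool scenario, again assuming independence and the 1-in-6 figure, and treating the event as "all five high-rated props shoot well below rating and all five low-rated props shoot well above":

```python
# Rough illustration of the two-pool scenario above, under the stated assumptions.
p_pool = (1 / 6) ** 5        # probability one pool of 5 all miss by >3 throws
p_both = p_pool * p_pool     # both pools doing it in the same round
print(f"Odds of both at once: about 1 in {1 / p_both:,.0f}")
# -> about 1 in 60,466,176 (~1 in 60 million)
```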
 
sorry, had to go there...

[attached image: lloyd.png]
 
Sorry man, there are no divisions called open masters or open grandmasters. Sorry to be a stickler. The mistake of calling those divisions "open" is way too common and continually perpetuated. :wall:

I hear Am2 all of the time around here as well. I guess this should be in the 'pet peeve' thread.

Semantics. Hope you don't hurt your head too much banging against the wall. I feel silly calling it "Professional Masters", so from now on I will assume everyone knows that "Masters" is different from "Advanced Masters", or I could simply use MM1 and assume everyone knows what that means. Nah, I think I'll keep using Open Masters, and enjoy watching you hit your head on the wall.
 
I agree with you on the open thing. However, calling Intermediate "Am 2" doesn't get my shorts in a bunch; the PDGA code for it is MA2, after all.

I can see that point: Open implies anyone can play, yet it is limited to players over 40 in that calendar year. I still won't call it "Professional Masters" because I am not a professional-level player and this is merely a hobby, not my job. Since most MM1 players migrate from the Open division, it makes sense that that is why people call it open masters even though it is technically wrong.
 
There's no such thing as a fixed SSA. It's always situational, as determined by the scores of the propagators. Propagators are "measuring sticks" whose scores indicate their measurement of how difficult the course played that round. Some shoot better, some about the same, and some worse than their skill level. But on average, they shoot their rating, within a standard deviation that gets smaller the more propagators contribute a measurement by way of their score.
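A toy illustration of how that spread narrows as more propagators are included; the per-prop, per-round standard deviation used here is an assumed placeholder for illustration, not an official PDGA figure:

```python
# How the spread of the propagator average shrinks with more props
# (standard error of the mean), under an assumed per-prop spread.
import math

sigma_one_prop = 25   # assumed spread of a single prop's round, in rating points
for n_props in (5, 10, 20, 50, 100):
    std_error = sigma_one_prop / math.sqrt(n_props)
    print(f"{n_props:>3} props -> average off by roughly +/- {std_error:.1f} points")
```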

This is the part I get brain freeze on: go to the extreme and assume there is a tournament where everyone is new, with no history at all. How are the round ratings calculated?

Or a more realistic scenario: there are a handful of players in a tourney who have no history and no rating. How are their scores factored into the overall ratings for that round?
 
We have to estimate those numbers from some parameters we have available. They may be less accurate than prop generated but the players getting ratings get numbers that are good relative to each other. If they are in a new country or isolated area, it doesn't really matter if they continue playing their area with the same pool of people. Their numbers remain relevant to each other. For those that travel, their ratings will get adjusted appropriately when they play against those from a much larger pool. Then they return to play in their area and their adjustment impacts the pool of players there. That's how Europe and Japan eventually got to the same basis as the U.S. once enough intermingling occurred. But the error wasn't that great to start with because we had figured out those manual adjustments before they were more heavily involved with events.
 
The way Chuck explained the ratings calculations to me a while ago (I've lost all my emails since then) is that IN = OUT. This means the course itself becomes irrelevant in the ratings calculations: the average rating of the propagators in a specific round will roughly equal the average round rating for that round. And you can take any round you'd like from an event; take the average rating of the players who played a certain layout in a round and it will be pretty close to the average round rating of that round.

This means that if you take 10 players rated 910, 915, 920, 925, 930, 935, 940, 945, 950 and 955 (an average of about 933) and put them on a course, and each player happens to shoot a 50, then that round would be a 933-rated round.

Now take the same course under the same conditions but use a different pool of players: the top 10 in the world. Let's say their average player rating is 1033. If they all happen to shoot that same exact 50, that round would instead be rated 1033.
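A minimal sketch of those two scenarios as described, assuming a placeholder points-per-throw conversion (the actual PDGA formula may differ):

```python
# Minimal sketch of the IN = OUT idea from the two examples above: a score equal
# to the props' average score rates at the props' average rating, whatever that
# average is. POINTS_PER_THROW is an assumed placeholder, not the PDGA formula.
POINTS_PER_THROW = 10   # assumed rating points per throw, for illustration only

def round_rating(prop_ratings, prop_scores, score):
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(prop_scores) / len(prop_scores)
    # Throws better than the props' average push the rating up, worse push it down.
    return avg_rating + (avg_score - score) * POINTS_PER_THROW

am_pool = [910, 915, 920, 925, 930, 935, 940, 945, 950, 955]   # averages 932.5
pro_pool = [1033] * 10                                          # averages 1033
scores = [50] * 10                                              # everyone shoots 50

print(round_rating(am_pool, scores, 50))   # ~933 with the Am pool
print(round_rating(pro_pool, scores, 50))  # 1033 with the Pro pool
```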

Chuck will have to correct me if I'm wrong, but that's the gist of the conversation we had some time ago. As we can all assume, these exact results would never actually happen in tournament play. But I know of at least one event where the same score from an Am weekend rated ~40-50 points less than on a Pro weekend for an identical course/conditions. The only thing that changed was the average player rating of the propagators. The fact that the formulas used in calculating ratings allow it to happen means that there is an obvious flaw in them.

Don't recall seeing if Chuck commented on this idea.
 
We have to estimate those numbers from some parameters we have available. They may be less accurate than prop generated but the players getting ratings get numbers that are good relative to each other. If they are in a new country or isolated area, it doesn't really matter if they continue playing their area with the same pool of people. Their numbers remain relevant to each other. For those that travel, their ratings will get adjusted appropriately when they play against those from a much larger pool. Then they return to play in their area and their adjustment impacts the pool of players there. That's how Europe and Japan eventually got to the same basis as the U.S. once enough intermingling occurred. But the error wasn't that great to start with because we had figured out those manual adjustments before they were more heavily involved with events.

You make it sound very organic, lol.
 
I can see that point: Open implies anyone can play, yet it is limited to players over 40 in that calendar year. I still won't call it "Professional Masters" because I am not a professional-level player and this is merely a hobby, not my job. Since most MM1 players migrate from the Open division, it makes sense that that is why people call it open masters even though it is technically wrong.

MM1 is for 40+ amateurs, who generally migrate from MA1. MPM is for 40+ players who accept cash awards (aka professionals), who generally migrate from MPO.
 
It is an organic process for and based on organic entities (players).

So the system would work perfectly if each player played against every other player in every situation, and works very well with players who play in many tournaments in a variety of situations?
 
So the system would work perfectly if each player played against every other player in every situation, and works very well with players who play in many tournaments in a variety of situations?
The system works "perfectly" now in the sense the same process is used for every round and the potential error probabilities in the outcome can be calculated. It's people applying their definition of a perfect process that results in the disconnect.
 
The system works "perfectly" now in the sense the same process is used for every round and the potential error probabilities in the outcome can be calculated. It's people applying their definition of a perfect process that results in the disconnect.

I didn't mean anything other than there would theoretically be no error with an infinite number of samples. Any system is imperfect in the sense there is a finite sample size.
 
I didn't mean anything other than there would theoretically be no error with an infinite number of samples. Any system is imperfect in the sense there is a finite sample size.
I have the feeling that even if perfect historical ratings were possible, many would still be dissatisfied that that number defined what they might be capable of doing in their next round along the lines of, "I'm a 1000 rated player trapped with a 940 history." ;)
 
MM1 is for 40+ amateurs, who generally migrate from MA1. MPM is for 40+ players who accept cash awards (aka professionals), who generally migrate from MPO.

This is a topic for another thread and another day, but I don't think the $40 cash I accepted last month placing second in a C tier event qualifies me as being a professional in any sense of the word, and there are really only about 15-20 DG'ers in the world who could legitimately be called professional. Even some of them rely on donations and the kindness of others to play the sport we all love.
 
This is a topic for another thread and another day, but I don't think the $40 cash I accepted last month placing second in a C tier event qualifies me as being a professional in any sense of the word, and there are really only about 15-20 DG'ers in the world who could legitimately be called professional. Even some of them rely on donations and the kindness of others to play the sport we all love.

I'm not sure that it is a different topic, Rob. The ratings system was developed by the PDGA to create a means of restricting entries to amateur divisions, as mentioned above. Likewise, the definition of professional and amateur was also created and defined by the PDGA to restrict entries into amateur divisions. Hence, you seem to be objecting to the PDGA division restrictions in both cases, or at least their method of defining them.
 
I'm not sure that it is a different topic, Rob. The ratings system was developed by the PDGA to create a means of restricting entries to amateur divisions, as mentioned above. Likewise, the definition of professional and amateur was also created and defined by the PDGA to restrict entries into amateur divisions. Hence, you seem to be objecting to the PDGA division restrictions in both cases, or at least their method of defining them.

I don't object at all, not sure how you came to that conclusion. It is important to prevent sandbagging, and the system does just that.

My objection is that the term professional means something different to me. I have a day job; that is my profession. I throw frisbees in the park for fun; if I happen to do well enough to get my entry fee plus a few extra bucks back, great. If not, I still had fun.
 