
Why ratings are stupid to worry about

You can't disprove standard deviation with a study you did using a sample of 15, I'm sorry. I fully understand the point that you are trying to make, but I don't agree in the slightest. There is no better tool for projecting future performance than past performance, and the more data, the better. That is not a matter of opinion.

Assuming all factors remain the same, you are absolutely correct. Equipment changes, techniques change and players are on various parts of the growth curve. My scores this year could be absolutely no indication of my scores last year, and hopefully no indication of my scores next year. How would you factor in amount and quality of practice, fitness and nutrition levels, etc. that could have an impact on a player's scores from round to round and year to year?
 
Interesting. What if it wasn't either/or, but both? Just weight the super propagators. Even better, rather than using just two subsets, weight every propagator on some kind of fluid scale based on how many rounds their rating is built from.
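The "fluid scale" idea could be sketched as a weighted average where each propagator's contribution grows with the number of rounds behind their rating. Everything below (the linear weight, the 80-round cap, the function name) is a made-up illustration of the suggestion, not the PDGA's actual formula:

```python
def weighted_ssa(propagators, cap=80):
    """Estimate a round's 'scratch score' from propagator ratings.

    propagators: list of (rating, rounds_in_rating) tuples.
    Each propagator's weight grows linearly with the number of rounds
    behind their rating, capped at `cap`.  Purely illustrative --
    not the actual PDGA calculation.
    """
    total_weight = sum(min(rounds, cap) for _, rounds in propagators)
    return sum(rating * min(rounds, cap)
               for rating, rounds in propagators) / total_weight

# A capped 100-round super prop counts 10x an 8-round prop,
# so the result leans toward the super prop's 1000 rating:
estimate = weighted_ssa([(1000, 100), (950, 8)])
```

With only two subsets you would pick a single cutoff; the fluid version just lets the weight vary continuously instead.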

Your comment proves my point. We're using more data. We don't have the luxury of the amount of data per event and player that might be more ideal. We have to produce ratings based on what we have, which is limited. There's no evidence using less data would be better, so we're not doing it. The other consideration is that there would rarely be enough super props when divisions play different courses and layouts in an event. We're lucky to get 5 props at all for some women's, junior and older divisions sometimes. But with ratings being stated as a PDGA benefit, you make the calculations.

Chuck, above is my initial comment. When you say you are using more data, more than what? I've never suggested using a few super props was better than using all propagators, just that the super props should be weighted. You then argued, based on that one study, that super props' player ratings are not any more accurate than a player with 8 rounds, which I can assure you is impossible.
Anyways, it was just a suggestion. I am a numbers geek so the whole thing fascinates me.
 
This thread is awesome
 
Chuck, above is my initial comment. When you say you are using more data, more than what? I've never suggested using a few super props was better than using all propagators, just that the super props should be weighted. You then argued, based on that one study, that super props' player ratings are not any more accurate than a player with 8 rounds, which I can assure you is impossible.
Anyways, it was just a suggestion. I am a numbers geek so the whole thing fascinates me.
I didn't say the 80-round props' ratings weren't more precise in terms of historical accuracy due to smaller standard deviation. However, their ability to establish current performance doesn't appear to be any better than that of players whose ratings are based on fewer rounds. And that may be because the older data is stale.
 
Why are ratings inversely related to score?
Good question. We chose to transform the calculations so "scratch" would be set at 1000 (versus zero in ball golf) for a few reasons: (1) It eliminated the need for decimal points, (2) It eliminated negative values, and (3) We thought it sounded cooler that world class players would have 4-digit ratings versus everyone else which meant inverting the range.
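The inversion described above can be seen in a toy transform where "scratch" maps to 1000 and better (lower) scores map to higher ratings. The 10-points-per-stroke slope and the scratch score below are assumptions for illustration only, not the published PDGA math:

```python
def round_rating(score, scratch_score, pts_per_stroke=10):
    """Toy version of the transform described above: shooting the
    scratch score maps to exactly 1000, and every stroke *under*
    scratch adds points, inverting the usual lower-score-is-better
    scale.  The 10-points-per-stroke slope is an assumption here,
    not the PDGA's actual value.
    """
    return 1000 + pts_per_stroke * (scratch_score - score)

round_rating(54, 54)  # scratch round -> 1000
round_rating(50, 54)  # 4 under scratch -> 1040: lower score, higher rating
```

Note how the transform delivers all three stated goals: no decimals, no negatives for any realistic score, and 4-digit ratings for scratch-or-better play.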
 
Disc Golf skill and results are not static and do not follow a straight line.
If results were static, and my score today would be the same as my score last year, then more results would always be better, period.
However, 80 rounds of results would come in over a period of a year, give or take. I don't know about everyone else, but my game has evolved a good bit from last year. My round results from last year are not going to be a great predictor of my round results this year.
If, OTOH, I have played 8 rounds, that will be over the time frame of a month or two. My score from last week is a much better predictor of my performance this week.
That is why Chuck is saying that super propagators (80 rounds over a year) are no better than normal props (8 rounds over a month or two).
This seems fairly simple and understandable, from my limited capabilities.
 
Good question. We chose to transform the calculations so "scratch" would be set at 1000 (versus zero in ball golf) for a few reasons: (1) It eliminated the need for decimal points, (2) It eliminated negative values, and (3) We thought it sounded cooler that world class players would have 4-digit ratings versus everyone else which meant inverting the range.
Sounds like an argument about par. I think the general public better understands handicap.
 
I didn't say the 80-round props' ratings weren't more precise in terms of historical accuracy due to smaller standard deviation. However, their ability to establish current performance doesn't appear to be any better than that of players whose ratings are based on fewer rounds. And that may be because the older data is stale.

You are basing your opinion on one micro study. If we used a 5,000-player sample for both 80-round super props and 5,000 8-round propagators, then recorded how many points each player deviated from their player rating each round, the super props would play SIGNIFICANTLY closer to their ratings on average. It wouldn't even be close, and I can say that with absolute certainty.
This type of statistical analysis isn't unique to golf. Baseball statistics, stock market predictions, horse racing. No one in their right mind would prefer two events' worth of data to two years' worth. We are just going to have to agree to disagree on this one.
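Under the (strong) assumption that skill is static and rounds are independent, basic sampling math partly supports both sides here. An 80-round rating does sit closer to the player's true mean, but the deviation of any single new round from the rating shrinks surprisingly little, because the round's own noise dominates. With per-round standard deviation sigma, a rating built as the mean of N rounds gives an expected single-round deviation of sigma times the square root of (1 + 1/N):

```python
import math

def round_vs_rating_sd(sigma, n_rounds):
    """Std dev of (new round - rating) when the rating is the mean of
    n_rounds independent rounds, each with std dev sigma, and skill
    is static:  Var = sigma^2 (the new round) + sigma^2/n (the rating).
    The sigma value used below is an assumption, not a measured PDGA
    figure.
    """
    return sigma * math.sqrt(1 + 1 / n_rounds)

round_vs_rating_sd(25, 8)   # ~26.5 points
round_vs_rating_sd(25, 80)  # ~25.2 points -- only about 5% tighter
```

So even in the static-skill best case the super prop's advantage per round is modest, and any skill drift over the year eats into it further, which is consistent with the earlier posts finding little difference in practice.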
 
Lulz. That's the right attitude to have!

It seems whether you agree or disagree with the ratings system, the title of this thread is spot on.

Just don't question it...
 
Not with that attitude, they're not.
Didn't intend to be snide. Simply that each sport has its own scoring, stats and terminology that players eventually learn to become insiders. It doesn't matter if the general public knows what hyzer or squop* means, for example.

*tiddlywinks term.
 
Didn't intend to be snide. Simply that each sport has its own scoring, stats and terminology that players eventually learn to become insiders. It doesn't matter if the general public knows what hyzer or squop* means, for example.

*tiddlywinks term.
I don't care too much about Prodigy, but I wish every manufacturer used their disc name system.
 
New tag line:

Play Disc Golf! Our arcane rules aren't as arcane as the arcane rules of Tiddlywinks!

Hmm, doesn't quite roll off the tongue, but it's off-putting and nonsensical, so...mission accomplished!
 
I've always thought ratings were only accurate to a +/- 25 point spread.

Pro worlds. Same conditions all week.

A pool shoots 67 at Moraine = 987
B pool shoots 67 at Moraine = 960

In 10 years of PDGA events, a 67 has never been rated lower than 981.

A pool shoots 66 at SRU = 986
B pool shoots 66 at SRU = 977

Every round the b pool played throughout the week was rated lower than the same score for A pool.


This is why the system is flawed and should never be a determining factor as to a tournament requirement for registration or anything else.


Waiting for Chuck to chime in. Btw, your system sucks. But I'm sure you have some way to justify it.

All this means to me is that pool B is shooting an average of 2 strokes worse than their ratings at course 1, and 1 stroke worse than their rating on course 2.
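The quoted gaps can be turned into approximate strokes with a rough conversion. Assuming about 10 rating points per stroke (a rule of thumb used here for illustration, not an official PDGA constant), the same 67 at Moraine rating 987 versus 960 implies the B pool field averaged close to three strokes over their ratings, and the 986 versus 977 at SRU about one:

```python
PTS_PER_STROKE = 10  # rough rule of thumb, not an official PDGA constant

def implied_strokes(rating_a, rating_b):
    """Convert the rating gap for the same score in two pools into the
    approximate stroke difference in how the fields played relative to
    their ratings."""
    return (rating_a - rating_b) / PTS_PER_STROKE

implied_strokes(987, 960)  # Moraine: ~2.7 strokes
implied_strokes(986, 977)  # SRU: ~0.9 strokes
```

On this reading the pools' ratings differ because the fields played differently relative to their own ratings, which is the point the reply is making.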
 
Now that Worlds went official, my rounds are a lot closer to what they should be. (As expected)

And now across A and B pool they are fixed.
Moraine 67 was 979 every day it was played.
 

Waiting for the op.......
 
You are basing your opinion on one micro study. If we used a 5,000-player sample for both 80-round super props and 5,000 8-round propagators, then recorded how many points each player deviated from their player rating each round, the super props would play SIGNIFICANTLY closer to their ratings on average. It wouldn't even be close, and I can say that with absolute certainty.
This type of statistical analysis isn't unique to golf. Baseball statistics, stock market predictions, horse racing. No one in their right mind would prefer two events' worth of data to two years' worth. We are just going to have to agree to disagree on this one.

I think you are pitting what is mathematically sound against what is logistically practical. Whether your mathematical approach is more accurate is irrelevant if it cannot be applied at your local C-tier.

I am no math whiz, and your argument makes sense to me, but the problem is that most disc golf tournaments, your local C- and B-tiers, are exactly that: a micro study.

I also agree with the argument that super propagators' old rounds are potentially less accurate than a standard propagator's recent, or maybe only, rounds.

I have enjoyed this discussion, some of the math ideas are a little over my head, but thanks for the intellectual stimulation.
 