
Are PDGA ratings crap?

Yes, Rodney was the pioneer in checking for slope in the early years of the rating system. We thought it might exist, since ball golf seemed to think it was needed. Since that time, we've shown that players with established ratings at all levels average their rating regardless of the course SSA, which ranges all the way from 41.4 to just over 70. This range is much wider than the range of course ratings in ball golf. Our much wider range of scoring data has allowed us to really see whether a slope adjustment is needed, and why ball golf might not really need one either if it did ratings dynamically the way DG does.

I disagree and I've already given examples why I don't believe this to be the case.

There are very obvious and long-lasting scoring differences between a 72.0/110 golf course and a 72.0/144 golf course. The scratch golfers will shoot the same scores on both, and the bogey golfers will shoot quite a bit worse on the 72.0/144 than the 72.0/110.
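To put rough numbers on that claim, here's a back-of-envelope sketch using ball golf's standard handicap-index times slope over 113 conversion (the 20-index player is just a hypothetical example):

```python
# Why two 72.0-rated courses play very differently for a bogey golfer once
# slope is applied. 113 is the "neutral" slope value in the standard formula.
def course_handicap(handicap_index, slope):
    return handicap_index * slope / 113

for slope in (110, 144):
    scratch = course_handicap(0.0, slope)
    bogey = course_handicap(20.0, slope)
    print(f"slope {slope}: scratch plays to +{scratch:.1f}, 20-index plays to +{bogey:.1f}")
# slope 110: scratch plays to +0.0, 20-index plays to +19.5
# slope 144: scratch plays to +0.0, 20-index plays to +25.5
```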

And again, the fact that PDGA ratings apply only to tournament scores is, IMO, a large failing. You require this because you require the "dynamic" ratings, but you're leaving out 95% of disc golf play, are you not?

I'm curious, too, how accurate the ratings are if something said earlier is true: that higher-rated players in a tournament artificially boost the ratings of all participants.
 
On topic...

PDGA's system suffers from sample size and sample distribution issues. Low sample sizes can cause wild swings in round ratings. This isn't unique to the PDGA rating system; it's a fairly common problem in statistics when trying to determine a quality mean, standard deviation, or other statistical measurement (in this case, dynamic scratch scores) from a sample set.
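A toy simulation of that sampling problem (this is not the PDGA's actual calculation, just an illustration of how much an estimated scratch score can swing when the field of propagators is small; all numbers are made up):

```python
import random

# Estimate a course's "scratch score" as the mean of n sampled propagator
# rounds, and see how widely the estimate varies across many small fields.
random.seed(1)
TRUE_MEAN, SD = 54.0, 3.0  # hypothetical course: true scratch score 54, round-to-round SD 3

def estimated_scratch_score(n):
    return sum(random.gauss(TRUE_MEAN, SD) for _ in range(n)) / n

for n in (5, 20, 100):
    estimates = [estimated_scratch_score(n) for _ in range(2000)]
    spread = max(estimates) - min(estimates)
    print(f"{n:3d} propagators: estimates span ~{spread:.1f} throws")
```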

I haven't read into it much, but I'd wager that ball golf needs the slope concept simply because it doesn't calculate pars off of statistical samples.
 
But again, you can't determine that rating except in tournaments. I can't go out and get a rating playing a round with a friend at Deer Lakes some random Tuesday. So for those rounds you can't get a rating at all.

In golf, you get a rating any time you play (if you keep a handicap).

You can do the same in DG -- all you need is a "normal conditions" SSA for your course. You can get this in a number of ways. Apps like Easy Scorecard have a growing database of these, but also let you input your own, and also calculate your handicap for you. My 9-year-old daughter and I compare handicapped scores at our local course all the time, even though I'm beating her by 20-ish strokes gross.
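For what it's worth, the handicap math can be as simple as the sketch below. This is just one plausible way to do it, assuming you have a "normal conditions" SSA for the course; it is not Easy Scorecard's actual formula, and the scores are hypothetical:

```python
# Minimal handicap sketch: strokes over (or under) the course's
# normal-conditions SSA, averaged over recent rounds.
def handicap(recent_scores, ssa):
    return sum(score - ssa for score in recent_scores) / len(recent_scores)

def net_score(gross, hcp):
    return gross - hcp

SSA = 54.0                               # assumed normal-conditions SSA
dad_hcp = handicap([58, 60, 57], SSA)    # about +4.3
kid_hcp = handicap([79, 81, 77], SSA)    # about +25.0
print(round(net_score(59, dad_hcp), 1), round(net_score(80, kid_hcp), 1))
# 54.7 55.0 -- comparable on a net basis despite a 20-ish stroke gross gap
```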

You have to keep in mind that DG is a hippy sport, primarily developed by hippies, and in large part run by hippies. Beyond that insulting generalization, there is also no money. It would be great if we could have a method to calculate a "normal SSA" for a course, and publish that for all to have and use. But without any money, and by the very nature of DG in public parks and almost wholly unregulated, that's just not possible. And that's not even getting into the detailed logistics of course changes, multiple tees, multiple basket locations (which aren't even close to the same thing as multiple pin placements), etc.

(I actually think it's *possible* to review and approve some "normal SSA" for a course. But the organization required for that goes far beyond the priorities (if not capabilities) of the people involved in this game.)
 
We make no claim to determine a player's rating for casual rounds. The point is that the PDGA system is designed to measure a player's performance in sanctioned events, primarily for sorting Ams into competitive ratings ranges for divisional play. Ball golf blends casual, league, and event rounds together, so it's not the same data set for comparison.

The attached chart shows how well a pool of 4800 players at different ratings average shooting their rating on courses in different SSA ranges. Black numbers show how many rating points (not throws) better than their rating they average, and red numbers how many rating points lower. Of note is the 66+ SSA category, which indicates lower-rated players seem to do better, not worse, relative to higher-rated players on higher-SSA courses. However, that data set is smaller than for most cells in the chart because there are few courses in the 66+ range, and those are mostly played in tournaments by higher-rated players.
 

Attachments

  • Ratings Players Shoot vs SSA.jpg
The attached chart shows how well a pool of 4800 players at different ratings average shooting their rating on courses in different SSA ranges.

SSA range of a course says nothing about the challenges presented to the golfer.

So those are fine numbers, but they have nothing to do with slope. Maybe you weren't trying to say anything about slope?
 
My issue with the rating system is throwing away any score that is 3 standard deviations or more from the mean. If you get rid of the unusually high scores, you should get rid of the unusually low scores also. Just on the basis of this, I feel the whole rating system rates higher than it should.
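As a thought experiment, here's a toy check of how much a one-sided exclusion like that could shift an average. It uses made-up round ratings and the 3-SD cutoff mentioned above, not the PDGA's actual data or exact rule:

```python
import random

# Drop only the rounds rated more than 3 SD below the average (the
# "unusually bad" side) and see how far the remaining average shifts.
random.seed(1)
rounds = [random.gauss(950, 25) for _ in range(10000)]  # hypothetical round ratings
mean_all = sum(rounds) / len(rounds)
sd = (sum((r - mean_all) ** 2 for r in rounds) / len(rounds)) ** 0.5

kept = [r for r in rounds if r >= mean_all - 3 * sd]
mean_kept = sum(kept) / len(kept)
dropped = len(rounds) - len(kept)
print(f"dropped {dropped} of {len(rounds)} rounds; "
      f"average moves {mean_kept - mean_all:+.2f} rating points")
```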
 
Ahh but they do. If players at lower ratings could not average their ratings, then slope would be implied.
 
You can do the same in DG -- all you need is a "normal conditions" SSA for your course. You can get this in a number of ways.

I realize that, but the PDGA seems to do all it can to keep those ratings hidden.

And I still don't believe those ratings will be as accurate without inclusion of something to measure the relative changes in difficulty as ability decreases.

You have to keep in mind that DG is a hippy sport, primarily developed by hippies, and in large part run by hippies. Beyond that insulting generalization, there is also no money. It would be great if we could have a method to calculate a "normal SSA" for a course, and publish that for all to have and use. But without any money, and by the very nature of DG in public parks and almost wholly unregulated, that's just not possible. And that's not even getting into the detailed logistics of course changes, multiple tees, multiple basket locations (which aren't even close to the same thing as multiple pin placements), etc.

I agree with all of that. :) There are several more obstacles to overcome in disc golf, changing pins being chief among them (flat agreement here that it's not even close to the same thing as hole locations on a green).

(I actually think it's *possible* to review and approve some "normal SSA" for a course. But the organization required for that goes far beyond the priorities (if not capabilities) of the people involved in this game.)
So in the meantime they'll defend the system they have? :) Which is perfectly understandable...
 
My issue with the rating system is throwing away any score that is 3 standard deviations or more from the mean. If you get rid of the unusually high scores, you should get rid of the unusually low scores also. Just on the basis of this, I feel the whole rating system rates higher than it should.
You must throw out the abnormally low-rated rounds because players can artificially produce them. No one can artificially produce an extremely high-rated round. On average, 1 in 50 rounds is excluded - 2%. The average PDGA member has 16 rated rounds per year, meaning roughly one round every 3 years is not used for the average PDGA member.
 
What about ratings inflation? I keep hearing more and more about how a 1000 rated round is more like a 970. Any truth in this?
 
Juju likes to spout that but there's no proof. The number of players with 1000+ ratings is the same percentage of active rated members as it was 5 and 10 years ago. Among players above 1000 there are a few more close to 1040, so the tip of the iceberg is flatter than before, when just Kenny and Jesper were at the point.
 
SSA range of a course says nothing about the challenges presented to the golfer.

So those are fine numbers, but they have nothing to do with slope. Maybe you weren't trying to say anything about slope?

I agree.

Ahh but they do. If players at lower ratings could not average their ratings, then slope would be implied.

I'm not entirely sure you understand how slope works in golf...?

Again, imagine two 370-yard holes that are identical except for the hazards: one has no trees, and the other is tightly bordered by trees, maybe with a big creek to the right.

The scratch golfer is going to average virtually the same score on both holes. The 27 handicapper is going to average much, much worse on the hole tightly bordered by trees and a creek with a 200-yard carry to reach the fairway.

Both holes would have a "hole rating" of 3.5 while the one cut through a forest next to a creek is going to have a much, much higher slope than the wide open one. Those 27 handicappers might average 4.8 on the wide open one with no trouble and 6.5 on the tight, watery one with the carry.
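For reference, this is roughly how ball golf turns that scratch-versus-bogey gap into a slope number, as I understand it. The 5.381 factor is the USGA men's constant; the bogey ratings below are hypothetical figures extrapolated from those two holes to a full 18:

```python
# Slope rating is roughly 5.381 * (bogey rating - course rating) for men.
def slope_rating(bogey_rating, course_rating, factor=5.381):
    return round(factor * (bogey_rating - course_rating))

# Open course: bogey golfer averages ~22 over a 72.0 course rating.
print(slope_rating(94.0, 72.0))   # ~118
# Tight, watery course: bogey golfer averages ~27 over the same 72.0 rating.
print(slope_rating(99.0, 72.0))   # ~145
```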

The trees and water are irrelevant to the scratch golfer, slightly relevant to the 9 handicapper, pretty relevant to the bogey golfer, and, well, they're pretty much all that matters to the 27 handicapper or worse.

In disc golf, both holes would have the same SSA, would they not, because the scratch golfer will average the same score on both?

Juju likes to spout that but there's no proof. The number of players with 1000+ ratings is the same percentage of active rated members as it was 5 and 10 years ago.

Yet disc golf has way more "bogey golfers" nowadays than it did then, no?

I guarantee that if I doubled the number of golfers out there, the actual number of golfers capable of playing the PGA Tour would not double. If disc golf is seeing tremendous growth, the percentage of the top rated players should tend to decrease, I believe.
 
I know exactly what slope means. What I'm saying is that in disc golf you can have the gnarliest shorter wooded course with a 55 SSA and a longer mostly open 55 SSA course. Players with 900 and 800 ratings will average their rating on both of those courses. Ball golf is saying a 10 and a 20 handicapper will average different scores on two courses similarly disparate in terrain and length (but with the same course rating), and that this must be adjusted with a slope fudge factor. I agree the slope factor is probably needed, not because slope is really needed, but because the two base course ratings are incorrect in the first place. Ball golf doesn't currently have the capability to verify that one way or the other, so it's just speculation.
 
I love how a 0.3-year guy from ball golf is arguing with a 23.6-year guy in charge of the ratings system for DG.

But the conversation is giving a bit of insight into the way ratings are made... which is pretty cool.
 
I'm not entirely sure you understand how slope works in golf...?

Again, imagine two 370-yard holes that are identical except for the hazards: one has no trees, and the other is tightly bordered by trees, maybe with a big creek to the right.

The scratch golfer is going to average virtually the same score on both holes. The 27 handicapper is going to average much, much worse on the hole tightly bordered by trees and a creek with a 200-yard carry to reach the fairway.

Both holes would have a "hole rating" of 3.5 while the one cut through a forest next to a creek is going to have a much, much higher slope than the wide open one. Those 27 handicappers might average 4.8 on the wide open one with no trouble and 6.5 on the tight, watery one with the carry.

The trees and water are irrelevant to the scratch golfer, slightly relevant to the 9 handicapper, pretty relevant to the bogey golfer, and, well, they're pretty much all that matters to the 27 handicapper or worse.

In disc golf, both holes would have the same SSA, would they not, because the scratch golfer will average the same score on both?


Sorry but I have to interject here. It is tough to debate an opponent who's got over 10 years of real data at his disposal. That doesn't mean he's right, but your posts are going to have a credibility deficit unless you counter with actual data. You are speculating about what different players "might average" on a made-up course you "imagine". Instead it would be nice to see the real data.
 
What I'm saying is that in disc golf you can have the gnarliest shorter wooded course with a 55 SSA and a longer mostly open 55 SSA course.

Yes Chuck. A 55 SSA course can either be gnarly and short, or open and longer. So your chart of numbers means nothing re: slope because both those courses are in the "55" bucket.

Until you can separate types of courses and crunch the numbers, there's no way to know if slope is needed or not.

Your rating system says a 1000-rated player will average 55 on both courses, regardless of the type of course. Your rating system also says an 850-rated player will average a 72 on both courses, regardless of the type of course.
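A hedged back-of-envelope for how those two numbers relate under the PDGA's linear scaling of throws to rating points; the ~8.8 rating-points-per-throw figure below is just an assumed value that makes the quoted 55/72 pair line up, not an official PDGA constant:

```python
# Expected score = SSA plus the player's rating deficit converted to throws.
def expected_score(player_rating, ssa, points_per_throw=8.8):
    # points_per_throw is an assumed figure; the real value varies with SSA
    return ssa + (1000 - player_rating) / points_per_throw

print(round(expected_score(1000, 55)))  # 55
print(round(expected_score(850, 55)))   # 72
```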

That seems, to some at least, counter-intuitive. That's not to say it's true or false, but so far you've not provided any proof either.
 
Rodney, if slope existed, we would still see a difference in the ratings lower-rated players shoot on courses in each SSA range, even though there's a mix of course terrains and lengths. I agree we should try to tease out the stats further by separating the types of courses like you tried to do. But whatever effect there might be, it's apparently small enough to not be statistically relevant. Still, we'll take a look, since I think we have enough of the numbers needed to do it.
 
Can't get bogged down by the ratings, but in general, they are accurate.

They might not tell the whole story, but what can in only 3-4 characters?

Practice, learn, practice more and the rating will take care of itself.
 
But whatever effect there might be, it's apparently small enough to not be statistically relevant.

Oh I forgot that part! I agree it's probably not really relevant. I just like it as a thought exercise. And at some point a statistical data-mining exercise. Ratings do a damn fine job of what they're intended to do.
 