
Why ratings are stupid to worry about

chris deitzel

Double Eagle Member
Gold level trusted reviewer
Joined
Mar 22, 2008
Messages
1,100
Location
Slippery Rock, PA
I've always thought ratings were only accurate to a +/- 25 point spread.

Pro Worlds. Same conditions all week.

A pool shoots 67 at Moraine = 987
B pool shoots 67 at Moraine = 960

In 10 years of PDGA events, a 67 there has never been rated lower than 981.

A pool shoots 66 at SRU = 986
B pool shoots 66 at SRU = 977

Every round the B pool played throughout the week was rated lower than the same score for the A pool.


This is why the system is flawed and should never be a determining factor for tournament registration requirements or anything else.


Waiting for Chuck to chime in. Btw, your system sucks. But I'm sure you have some way to justify it.
 
"A" pool has players who are on average shooting better relative to their rating that week than the "B" pool players where more are playing worse than their rating. I would be concerned if "A" pool did not get a somewhat higher unofficial rating than "B" pool for the same score. That's why they are averaged together for official ratings which come out tomorrow or maybe even later today because it brings together all players in the division for the calculations.

"A" and "B" pool is not much different from taking 72 players in the same division who just played round 1 and splitting them into two groups with one playing better and one playing worse. If the unofficial ratings for each of these groups is calculated separately, the rating for the same score will be different between the groups.
 
"A" pool has players who are on average shooting better relative to their rating that week than the "B" pool players where more are playing worse than their rating. I would be concerned if "A" pool did not get a somewhat higher unofficial rating than "B" pool for the same score. That's why they are averaged together for official ratings which come out tomorrow or maybe even later today because it brings together all players in the division for the calculations.

"A" and "B" pool is not much different from taking 72 players in the same division who just played round 1 and splitting them into two groups with one playing better and one playing worse. If the unofficial ratings for each of these groups is calculated separately, the rating for the same score will be different between the groups.

Thanks, Chuck.

That's what I was thinking about the pools. Also, I don't think a 9-point difference at SRU is all that significant.
 
I would be concerned if "A" pool did not get a somewhat higher unofficial rating than "B" pool for the same score.

At Ledgestone the opposite happened: a 62 was 1038 for the C pool at Lake Eureka, while a 62 was 1010 for the A pool.
 
I don't get why people insist on comparing different rounds from different fields on the same course to each other. It's not the way ratings work, it's not how ratings are intended to work, and there is nothing in the way they are calculated to control for that. Obviously you're going to get some weird-looking results and noise when you're comparing apples to oranges.
 
For Ledgestone the winds were very different between Pool B/C and Pool A. IMO they may need to be rated separately.
 
Waiting for Chuck to chime in. Btw, your system sucks. But I'm sure you have some way to justify it.

OK, Chuck chimed in and made a lot of sense. Now we're waiting on you.
 
I don't get why people insist on comparing different rounds from different fields on the same course to each other. It's not the way ratings work, it's not how ratings are intended to work, and there is nothing in the way they are calculated to control for that. Obviously you're going to get some weird-looking results and noise when you're comparing apples to oranges.

I'm confused, isn't the purpose of ratings to make comparisons? If it's apples to oranges, what is the point of using ratings to determine entrance to events?
 
I don't get why people insist on comparing different rounds from different fields on the same course to each other. It's not the way ratings work, it's not how ratings are intended to work, and there is nothing in the way they are calculated to control for that. Obviously you're going to get some weird-looking results and noise when you're comparing apples to oranges.

I'll go further---I don't get why people get hung up on the rating for a single round, or a handful of them.

By the time a given player averages 40 or so rounds together, with their fairly minor variances, it produces a pretty consistent number. Consistent enough to separate Ams into reasonably competitive divisions. Which is ALL it was intended for.
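A quick simulation of that averaging effect, with made-up numbers (a "true" skill of 950 and round-to-round noise on the order of the +/- 25 points floated earlier in the thread): single rounds bounce around, but the 40-round average barely moves.

```python
# Illustrative only: made-up skill level and noise, not real PDGA data.
import random
import statistics

random.seed(7)
TRUE_SKILL = 950   # assumed "true" ability of the player
ROUND_NOISE = 25   # assumed round-to-round spread, per the thread's estimate

def forty_round_average():
    rounds = [random.gauss(TRUE_SKILL, ROUND_NOISE) for _ in range(40)]
    return round(statistics.mean(rounds))

# Single rounds swing roughly from 900 to 1000, but repeated 40-round
# averages all land within about 10 points of 950.
print([forty_round_average() for _ in range(5)])
```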
 
I'm confused, isn't the purpose of ratings to make comparisons? If it's apples to oranges, what is the point of using ratings to determine entrance to events?

Because of what Mr. Sauls said above:


By the time a given player averages 40 or so rounds together, with their fairly minor variances, it produces a pretty consistent number. Consistent enough to separate Ams into reasonably competitive divisions

My first post was phrased somewhat poorly.
 
.....although I will agree a bit with the title. If you're worried about ratings, you've either got strange priorities, or an extremely easy life.

(Except, of course, for the folks at the PDGA who have to work with them, and TDs trying to remember to get the TD report straight).
 
Because of what Mr. Sauls said above:

My first post was phrased somewhat poorly.

I would argue that when ratings vary by 30+ points depending on who is playing the tournament, those numbers shouldn't ever prefer someone rated 971 and exclude someone rated 969 from registering for a tournament.
 
I'm merely stating that they are only accurate within a +/- 20-30 point variance. Which isn't bad, but I think the system has caused some issues: tournaments where entry times are based on ratings, divisional breaks, etc.

But I do think that a person who plays enough tournaments has a fairly accurate indication of their rating.

It just seems to fluctuate too easily. And it's frustrating when someone says they just shot a 1000-rated round, and then you realize you shot the same score last year or last week and your rating was only 970.

Really I was just bored and haven't posted in forever.

Guess I just miss the old days when we had no ratings, when you moved up a division when you felt you were ready. But I understand it from a sandbagging point of view, though I think sandbagging was rectified by other players making fun of them for winning 25 Intermediate events in a row.

Man, I need to get out and throw some discs.
 
I would argue that when ratings vary by 30+ points depending on who is playing the tournament, those numbers shouldn't ever prefer someone rated 971 and exclude someone rated 969 from registering for a tournament.

I will agree that two rating points is probably well within the margin of error. I don't know what that is supposed to mean about people being excluded by rating. Pro divisions only have a suggested rating. Am divisions have cutoffs that are well above where most players start playing up. And having too high a rating to play down a division is not the same thing as being excluded from playing in a tournament.
 
I will agree that two rating points is probably well within the margin of error. I don't know what that is supposed to mean about people being excluded by rating. Pro divisions only have a suggested rating. Am divisions have cutoffs that are well above where most players start playing up. And having too high a rating to play down a division is not the same thing as being excluded from playing in a tournament.

Bigger tournaments are creating tiered registration based on rating for pro divisions. Chris's point and mine is that it seems like an arbitrary cutoff. You can certainly use ratings to tell that Paul McBeth is a better player than, say, me. But using them to make a distinction between 973, 969, 965, and 967 is pretty arbitrary.
 
I think getting caught up in "a course's SSA used to be X" is a silly argument. You guys did a ton of work on the course (including concrete tees), and the course may have played easier because it was better maintained. Or (which is my opinion) disc golfers are better than they used to be. I know that locally (Charlotte, NC for me) people are playing better and better golf on the same courses, and if collectively we all shave a stroke off our scores, the SSA goes down by 1.
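For the arithmetic in that last sentence, here is a small sketch under the same simplified assumptions used above (implied SSA from each propagator's rating and score, at an assumed 10 rating points per throw, not the official formula): if everyone shoots one stroke better while ratings stay put, the implied SSA drops by exactly one stroke.

```python
# Toy numbers and a toy formula, not the official PDGA calculation.
POINTS_PER_THROW = 10  # assumed conversion between strokes and rating points

def toy_ssa(propagators):
    """Implied SSA (score a 1000-rated player would shoot) from (rating, score) pairs."""
    return sum(score + (rating - 1000) / POINTS_PER_THROW
               for rating, score in propagators) / len(propagators)

then_field = [(950, 55), (975, 52), (1000, 50)]      # hypothetical older field
now_field = [(r, s - 1) for r, s in then_field]      # same ratings, everyone one stroke better

print(toy_ssa(then_field))  # about 49.8
print(toy_ssa(now_field))   # about 48.8 -> SSA down by exactly one stroke
```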
 
I would argue that when ratings vary by 30+ points depending on who is playing the tournament, those numbers shouldn't ever prefer someone rated 971 and exclude someone rated 969 from registering for a tournament.

If you can't understand the concept that the same course can play harder or easier from one tournament to the next, then we are at an impasse that will keep this discussion from going anywhere.
 