
[noob question] Are ratings rated equally?

DiscFifty

Banned
Joined
Sep 2, 2012
Messages
4,784
Are PDGA ratings rated equally all around? For instance, if an Open player and a Rec player both shoot the same score, will they both have the same rating for that round? If the answer is yes, then it's logical to think that regardless of what division they play in, if their rating is similar to yours you should be competitive with them. :confused:
 
Yes, both would get the same round rating for playing the same score, at least if it's in the same tournament. However, that's not quite the same as player rating, which is a weighted average based on your last eight(?) rounds.
 
Player rating is the average of ALL of one's rounds going back one calendar year from their most recently rated round (not limited to 8 rounds).

And yes, the intent of player ratings is to group players of similar ratings into competitive divisions. If players with disparate ratings (say a 990-rated Open player and an 870-rated Rec player) happen to shoot the same score on the same course in the same round, they'll receive the same rating for that round. But the likelihood that they could repeat that in subsequent rounds is low, since either the Open player had a bad/below-average round, the Rec player had a good/above-average round, or some combination of the two. The expectation would be regression to the mean: the Open player would likely shoot better and the Rec player worse, i.e., closer to their averages.
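As a toy illustration of that averaging (not the official PDGA formula; I believe the real system also double-counts the most recent quarter of rounds and drops extreme outliers, both omitted here):

```python
from statistics import mean

def player_rating(round_ratings):
    """Hypothetical helper: a player rating as the plain average of all
    rated rounds from the past year (a simplification of the real system)."""
    return round(mean(round_ratings))

# Two players share one 992-rated round but have very different ratings:
open_player = [985, 994, 1001, 978, 992]   # consistently around 990
rec_player  = [860, 875, 851, 882, 992]    # one above-average 992 round

print(player_rating(open_player))  # 990
print(player_rating(rec_player))   # 892
```

The shared 992 round barely moves either average, which is the regression-to-the-mean point above.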
 
The interesting part is that if you shoot the same score at the same course but in different tournaments, your rating will be different, despite it possibly representing the same level of skill.

My complaint about ratings has always been that they are not comparable across regions. If you regularly play with a bunch of scrubs, your rating will be much higher than if you play with highly competent players.
 
[citation needed]
 
The first claim (same score, different tournaments, different ratings) is true; the regional complaint is doubtful.

The issue with the first claim reflects that (1) different days are not identical: weather, foliage, and other factors change; (2) the difference is generally minimal; and (3) round ratings were never intended to be exact. Averaged together, they generate pretty good player ratings, allowing amateurs to be grouped by skill level. A 10-point discrepancy in a single round rating would move the player rating of someone who plays once a month by a fraction of a point.
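The once-a-month arithmetic, as a quick sketch (12 rated rounds in the window, assuming a plain average):

```python
# A single round mis-rated by 10 points, diluted across 12 rounds,
# shifts the player rating by well under one point.
rounds_in_window = 12
round_error = 10
impact_on_player_rating = round_error / rounds_in_window
print(round(impact_on_player_rating, 2))  # 0.83
```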
 
The regional complaint is exactly the opposite of the argument many people try to make about ratings: they claim that playing with higher-rated players boosts ratings, not the other way around. Either way, the math doesn't really back that up; if you play at the same skill level, you'll average out to a pretty similar rating no matter who you play against. Check out Darrell Nodland in North Dakota for an example: he maintains a really high rating while never playing against other players rated anywhere near him.

You could make the argument that some people play to their competition, and don't throw as well when they don't have better players to compare themselves to, but that has nothing to do with the ratings system.
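That claim can be sanity-checked with a toy simulation, assuming round ratings come from comparing your score to a field-derived scratch score (a big simplification, with a made-up 10-points-per-throw constant and invented SSA/noise numbers):

```python
import random
random.seed(1)

PTS_PER_THROW = 10.0   # rule-of-thumb constant; assumption, not the PDGA value
TRUE_SSA = 50.0        # score a 1000-rated player averages on this course

def shoot(rating):
    # A player's expected score follows from their rating, plus round noise.
    return TRUE_SSA + (1000 - rating) / PTS_PER_THROW + random.gauss(0, 2)

def rate_round(my_score, field):
    # Back out an SSA estimate from the field: each player's score,
    # shifted to what a 1000-rated player would have shot.
    ssa = sum(s - (1000 - r) / PTS_PER_THROW for r, s in field) / len(field)
    return 1000 - (my_score - ssa) * PTS_PER_THROW

def season_average(my_rating, field_ratings, rounds=500):
    total = 0.0
    for _ in range(rounds):
        field = [(r, shoot(r)) for r in field_ratings]
        total += rate_round(shoot(my_rating), field)
    return total / rounds

weak_field   = [850, 860, 870, 880]
strong_field = [990, 1000, 1010, 1020]
print(round(season_average(920, weak_field)))    # ~920
print(round(season_average(920, strong_field)))  # ~920
```

In this model a 920-skill player averages out near 920 against either field, because the rating measures score relative to the field's implied scratch score, not finishing position.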
 
Yes, the same score in different tournaments can be rated differently, but only by a few points, and that mainly depends on the conditions and the field you played against in that event.
 
Does your rating average vary on different types of courses?
 
Say you shoot the same score at the same course in two tournaments. At tournament A it's a nice sunny calm day. At tournament B it's 30 MPH wind and rain. You shoot X in both. I'd say one feat is a little more accomplished than the other.

Round ratings are based on how you did against the competition playing that layout that day, not against the course every day (which itself may not be the same every day if different tees or pins are used).
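A minimal sketch of that day-by-day dependence, using the rough 10-points-per-throw rule of thumb (the real per-throw value varies by course) and made-up SSA numbers:

```python
def round_rating(score, ssa, pts_per_throw=10.0):
    """Toy round rating: 1000 minus rating points per throw over the
    field-derived scratch scoring average (SSA) for that layout that day."""
    return round(1000 - (score - ssa) * pts_per_throw)

# Same 48 on the same holes; the field-derived SSA differs by day:
print(round_rating(48, ssa=49.0))  # calm day: 1010
print(round_rating(48, ssa=52.5))  # windy, rainy day: 1045
```

The windy-day 48 rates higher because the field's scores imply a harder day, which is exactly the "more accomplished feat" point above.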
 
I played a one-round PDGA event at Cedar Hills; the pins were in the gold position and, for the first time ever in a PDGA event, hole 10 was in the long position (add one stroke). I shot a 48. A 48 from the short position in any tourney I could find on pdga.com was rated from 1025-1040. My round was rated 1006, then corrected to 1005. To this day I can't explain it.
 
In my estimation the golds are at least 4 strokes harder than the shorts.
 
Regarding Mattalica's post, I can attest to the gold layout being significantly harder. I believe the event he speaks of is the only rated one ever played on the newest/hardest layout.
 
Ratings on a layout over time are a good way to gauge its actual difficulty, versus what players may think the difference between two layouts is. Since leagues started, it's become clear how much tournament pressure impacts ratings. A one-round event (or league night) will most likely carry less pressure than a tournament with two or more rounds, because you won't have the eventual top four in each division in one group unless the groups were specifically seeded by rating in the first place.
 
So that accounts for more than 8 strokes?
 
I would be willing to bet, even money, that the pins moving back into a less accessible area accounts for more strokes than pressure does. But then again, pressure only has negative effects, right? :doh:
 
The biggest misconception in ratings, IMO, is that the course matters at all. The formula isn't designed to measure the course, so comparing courses (or layouts of a single course) based on ratings, particularly only a single round's worth, is flawed from the jump.
 
I dunno. The ratings give a pretty good assessment of the course for that round. And on the courses I'm most familiar with, when the layouts don't change much and the weather isn't extreme, the ratings have been pretty consistent from year to year, within about a one-throw range. They give a pretty good measure of the difficulty of the courses.

I think the biggest misconceptions are that individual round ratings are precise (treating 992 vs. 1005 as a meaningful gap), and in particular that small samples, like a single round, should be expected to be precise.

Which brings me to agreeing that "....particularly only a single round's worth of ratings, is flawed from the jump."
 