
Round Ratings?

Technically, that doesn't matter at all.

Theoretically.

Yup. The only reason lower ratings might have that kind of effect is that lower ratings tend to belong to more volatile players. More specifically, they often belong to players who are rapidly improving, so it's more likely that such players will outplay their rating by several throws. If enough players in a field do that, the result could be a lower SSA and lower round ratings.
 
And this just shows how ratings can be regionally affected.

Which IMO is BS... I've heard people comment on videos about how they were going to play local events to get their ratings up, and other crap. None of it seems to be standardized or fair by any means.

...and IMO, your opinions are BS. There are only very minor regional biases in ratings...around 1 throw, or roughly 10 rating points.

If you can prove otherwise, show me your data.

Here is my data - taken from Pro Worlds 2013. This is valid data as it is based on a large field size, lots of rounds, and consistent weather throughout the event.
 
I played an event this weekend that JohnE McCray was at, and he smashed the course records in both rounds. The result was that my rounds were rated about 25 points worse than at the previous tourney at this course.

For example, I shot a 61 at this course in May for a 916-rated round; last weekend I shot a 60 and it was an 899-rated round! Ratings can be strange for sure.
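
Under a simplified stand-in for the actual formula (which isn't public), the mechanism looks something like this. The assumptions here are roughly 10 rating points per throw and the props' average score pegged to their average rating; the field numbers below are invented, not from either event:

```python
# Toy version of a field-based round rating, standing in for the real
# (unpublished) formula. ASSUMPTIONS: each throw is worth ~10 rating
# points, and the props' average score is pegged to their average rating.
POINTS_PER_THROW = 10

def round_rating(score, avg_prop_rating, avg_prop_score):
    # Your rating rises the further you finish under the props' average score.
    return avg_prop_rating + (avg_prop_score - score) * POINTS_PER_THROW

# Made-up fields on the same course:
# Event A: props average a 930 rating and a 61.5 score.
print(round_rating(61, avg_prop_rating=930, avg_prop_score=61.5))  # 935.0
# Event B: props average 935 but shoot 2.5 throws hotter on average.
print(round_rating(60, avg_prop_rating=935, avg_prop_score=59.0))  # 925.0
# The better score comes out with a lower rating because the field played hotter.
```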

I played in this tourney as well. Both my travelling partner and I were able to guess our round ratings, based on how we thought we played, before the unofficial results were posted, and we were both dead on. I don't think McCray affected the ratings that much. He sure shot some hot rounds though!
 
Based on what Chuck Kennedy has told me about the PDGA rating system, rounds 1, 2, & 4 line up (I didn't do the math on the 3rd round). What I think is the big downside of the ratings calculation is its IN = OUT nature: take all the ratings of the propagators in the round and find the average, then find the round rating for each of those props and take the average; both of those numbers should be relatively close.

Round 1: Avg. Player Rating = 953.4
Avg. Round Rating = 952

Round 2: Avg. Player Rating = 947.7
Avg. Round Rating = 953.25

Round 4: Avg. Player Rating = 953
Avg. Round Rating = 949

That is assuming those were the only propagators for those round ratings. If other divisions played the same layout then you would have to average those players and their rounds into the equation as well.


Here is where (I think) the rating system doesn't hold up. Take 5 pros and put them on course X. The average rating of the players is 1000. They shoot 48, 49, 50, 51, and 52 for their scores. The average round rating will be 1000.

Now take a separate group of players with an average rating of 950 and put them on the same course under the same conditions. They shoot the same 48, 49, 50, 51, and 52. The average round rating will then be 950.
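
A quick sketch of why it comes out that way, again using a simplified stand-in for the unpublished formula (each throw assumed to be worth about 10 points, and the props' average round rating pinned to their average player rating, i.e. the IN = OUT behavior described above):

```python
# Simplified stand-in for the unpublished formula: each throw is worth
# ~10 rating points and the props' average round rating is pinned to
# their average player rating (the IN = OUT behavior described above).
POINTS_PER_THROW = 10

def round_ratings(scores, prop_ratings):
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(scores) / len(scores)
    return [avg_rating + (avg_score - s) * POINTS_PER_THROW for s in scores]

scores = [48, 49, 50, 51, 52]

group_a = round_ratings(scores, [1000] * 5)  # five 1000-rated players
group_b = round_ratings(scores, [950] * 5)   # five 950-rated players, same scores

print(group_a, sum(group_a) / 5)  # [1020.0, 1010.0, 1000.0, 990.0, 980.0] 1000.0
print(group_b, sum(group_b) / 5)  # [970.0, 960.0, 950.0, 940.0, 930.0] 950.0
```

The group average comes out equal to the props' average rating by construction, which is the IN = OUT point above.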

There is, from what I can see, very, VERY little influence from the actual course SSA factored into the round ratings. But the ratings equation is kept under lock and key, so there is no way to determine exactly why this anomaly is allowed to happen.
 
The bolded part would be because there is no such thing as "actual course SSA" at all. It's not an anomaly. The ratings formula is not intended to rate courses; it is intended to rate players. The course itself is irrelevant to the calculations.
 
I would agree with you, but IIRC that's not what I was told by Chuck. It was made to sound as though each player's round rating was based more on how they played that layout in those conditions and less on how they played compared to the other competitors, but the actual calculations don't agree.

I'll look for the emails tonight to reaffirm and post what was relevant.
 
Now take a separate group of players with an average rating of 950 and put them on the same course under the same conditions. They shoot the same 48, 49, 50, 51, and 52. The average round rating will then be 950.

That is extremely unlikely.

In the minuscule chance that it does happen, you can blame it on small samples.

Take 500 of each group, and it will never happen.

I know, you'll never get 500. So take 25 of each, 5 tournaments, 20 rounds, and it'll all start to sort itself out.
 
You are correct, and that was what Chuck argued as well. In any real tournament the likelihood of that happening is too small to worry about, and it's an even more remote chance when you add in a normal field of competitors. My point (and I like arguing as the devil's advocate on this particular issue) is that if the equation were set up to account for how a course actually plays, the way a ball golf handicap is determined by the USGA, then this shouldn't even be a possibility. If a person plays a course in layout X under conditions Y and shoots course par, then their round rating should be the same as anyone else's under those same criteria. It should not depend on the other people playing with you at the same time and their collective player rating average.
 
So basically, you think the player ratings should be 100% different from what they actually are? Fine if that's your opinion, but pretty damn near impossible to pull off given the diversity of disc golf courses.
 
Bear in mind that the main reason for ratings is to separate amateur players into divisions of roughly equal ability. Aberrations in a single round, or even an entire event, average out when a bunch of events are accumulated, at least to the point that any variations are pretty meaningless for that purpose.

The other aspects of ratings (the 1000-point line, the highest-rated round ever, a person's personal best, course SSA) are side benefits. The system isn't designed to produce very precise results for those, and it's the expectation that it should that seems to get under people's skin.
 
So basically, you think the player ratings should be 100% different from what they actually are? Fine if that's your opinion, but pretty damn near impossible to pull off given the diversity of disc golf courses.

Different ratings - no. How they achieve the ratings - yes.

I think the USGA handicap system is a very good model and could translate well, with some modifications, to disc golf. Each course has not only a par but also a slope rating, which reflects how difficult that course is. You could have two disc golf courses that are the same length on all 18 holes, but one is fairly wide open and flat while the other is in the middle of mature woods with lots of elevation change. The second course would have a much higher slope, and shooting course par on it would earn a higher rating than on the first. But the main point is that those ratings are independent of how other people shoot on that course and whether they are rated higher or lower than you.
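
For reference, the ball golf side of that analogy boils down to the handicap differential: a score is judged against the course's own rating and slope rather than against whoever else teed off that day. The course numbers in this sketch are invented for illustration:

```python
# USGA-style handicap differential: a score is measured against the
# course's own rating and slope, not against the rest of the field.
STANDARD_SLOPE = 113  # slope of a course of "standard" difficulty

def handicap_differential(adjusted_score, course_rating, slope_rating):
    return (adjusted_score - course_rating) * STANDARD_SLOPE / slope_rating

# The same 85 on a flat, open course vs. a tight, hilly one:
print(round(handicap_differential(85, course_rating=69.5, slope_rating=118), 1))  # 14.8
print(round(handicap_differential(85, course_rating=72.8, slope_rating=139), 1))  # 9.9
```

A disc golf version would need per-layout numbers like that, which is exactly where the "who rates every course?" problem comes in.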
 
Though I kind of agree that disc golf courses need a "slope" element in the ratings equation, who is gonna pay one person to go out and rate every course in the world? There is enough "local bias" in the course ratings on here that I am not sure you could trust locals or designers to give an accurate rating every time.

Hey Martin, sounds like a good job for you, since you already have over 1k, lol.
 
It is not too hard to make up your own model in Excel for predicting round ratings. There is no magic, and no course SSA, factored in, as some of you disillusioned souls may think.

I used regression to create an equation for the points-per-stroke piece, based on a few large events with varying average scores, and the rest is on the PDGA website. The average score of the props = the average of their player ratings.
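
For anyone who'd rather do it in Python than Excel, the idea looks roughly like this. The scores and ratings are made up, and the fitted points-per-stroke slope is just whatever this sample gives, not the PDGA's actual value:

```python
# Do-it-yourself round-rating model along the lines described above.
# The props' scores and ratings below are invented; the slope is fit
# from them by regression rather than taken from the PDGA.
import numpy as np

def fit_points_per_stroke(scores, ratings):
    """Least-squares slope of prop rating vs. prop score for one round."""
    slope, _intercept = np.polyfit(scores, ratings, 1)
    return -slope  # rating drops as score rises, so flip the sign

def predict_round_ratings(scores, prop_ratings, points_per_stroke):
    avg_rating = np.mean(prop_ratings)
    avg_score = np.mean(scores)
    # Average score of the props maps to the average of their player ratings.
    return avg_rating + (avg_score - np.asarray(scores)) * points_per_stroke

scores = [52, 54, 55, 57, 58, 60]              # one made-up round of propagators
prop_ratings = [1008, 991, 975, 966, 953, 941]

pps = fit_points_per_stroke(scores, prop_ratings)
print(round(pps, 1))                                        # ~8.4 points per throw here
print(predict_round_ratings(scores, prop_ratings, pps).round(1))
```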
 
I'm curious as to how much SSA varies, anyway, the initial topic of this post notwithstanding. Of the 3 courses I'm most familiar with, 2 of them have had amazingly consistent SSAs over the years. The 3rd changes its format from round to round, so who knows? (And who cares?)

Just as small samples lead to variations, if you take, say, 10 to 20 rounds on a given course---assuming no major changes, and reasonably good weather---the average 1000-rated round should give you a figure that's pretty indicative of the scoring challenge on that course. And I'd bet that the next 10 or 20 rounds come very close to being the same thing.
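
If you have the round data handy, that figure is easy to back out. The (score, rating) pairs below are invented just to show the idea:

```python
# Rough gauge of a course's scoring challenge from rated rounds on one
# layout, per the idea above: estimate the score a 1000-rated round
# works out to. The data here is made up for illustration.
import numpy as np

rounds = [(51, 1012), (54, 981), (53, 994), (56, 962), (52, 1003),
          (55, 973), (50, 1021), (57, 950), (54, 984), (53, 990)]

scores, ratings = zip(*rounds)
slope, intercept = np.polyfit(ratings, scores, 1)  # score as a function of rating

ssa_estimate = slope * 1000 + intercept  # score that would rate about 1000
print(round(ssa_estimate, 1))            # roughly 52 for this made-up layout
```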
 
One thing that mucks things up is that many courses have multiple pin positions, which can vary GREATLY in difficulty. In regular golf you have a green; the hole can be anywhere on it, and that's the chief limitation. In disc golf we can put the basket any damn place we want. 130' hole? Sure. B pin at 580'? No problem.
 
We have a hole that is 223' in one layout, 714' in another.

Yeah, that's one of the things that really affects the SSA, limiting its usefulness from one event to another, and it would really make it hard to develop another system of course rating. I've played events where they mix and match long tees and short tees in different rounds. Or use a different combination of tees. Or add O.B. for tournaments. Or add temporary holes. I know of courses where the difficulty level changes with the time of year. And so on.

The advantage of the current system is that it measures the course under the conditions in which it's being played. The usefulness of that measurement, beyond creating player ratings, may vary. But, with whatever flaws, it makes more sense to me than any other system I've seen proposed.
 
