
[Request] Let's Settle The Ratings Mystery

Let's say Paul plays 5 tournaments, shoots the best score, and ultimately his rating goes up. Now he skips the next 5 (your rating doesn't change if you don't play). Meanwhile, someone else (or several people) plays well enough during those 5 tournaments that their ratings go up. So the next time all of them play in the same tournament, the "ratings points in" has increased. Is this not plausible?

I think I understand now why some local pros say "C-tiers are rating killers." Probably because you have so many chances for low-rated Ams to shoot lights-out, which would drag down the pro's round rating if he shot the same thing. It probably also explains why they would prefer MPO to play the longs and the Ams the shorts, to keep the rounds rated separately. hmmm....
"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." - Goodhart's Law.

For the same reasons that the stock market is a garbage indicator of economic health right now, any system you can imagine would be prone to the exact same criticisms you're applying to the PDGA rating system. When you start targeting the system with explicit factors designed purely for its manipulation, it ceases to be a good system.

Any system with enough completely ridiculous parameters and inputs applied is going to crumble.
 
Let's say Paul plays 5 tournaments, shoots the best score, and ultimately his rating goes up. Now he skips the next 5 (your rating doesn't change if you don't play). Meanwhile, someone else (or several people) plays well enough during those 5 tournaments that their ratings go up. So the next time all of them play in the same tournament, the "ratings points in" has increased. Is this not plausible?

I think I understand now why some local pros say "C-tiers are rating killers." Probably because you have so many chances for low-rated Ams to shoot lights-out, which would drag down the pro's round rating if he shot the same thing. It probably also explains why they would prefer MPO to play the longs and the Ams the shorts, to keep the rounds rated separately. hmmm....

Paul shooting the best score won't cause his rating to go up. He could win and still see his rating go down.

What would cause his rating to go up would be beating the entire field by an ever-increasing number of strokes. And as he does, he pushes other people's ratings lower. (Fractionally, if it's a decent-sized field).
 
I think I understand now why some local pros say "C-tiers are rating killers." Probably because you have so many chances for low-rated Ams to shoot lights-out

It probably has more to do with a lot of C-tiers being on courses where there won't be enough scoring separation to get rounds rated as we are used to. Roughly 10 rating points equals one stroke.

But if 800-rated players are shooting even par and 1020-rated pros are shooting -8, it's going to mess with the statistics. It isn't that a few lower-rated Ams shooting lights-out skew ratings for the top-level guys; it's the scoring across the whole field.
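For a rough sense of that 10-points-per-stroke rule of thumb, here's a toy conversion (my own sketch, not the PDGA's actual formula; the SSA value here is made up):

```python
# Toy sketch of the ~10-rating-points-per-stroke rule of thumb.
# NOT the PDGA algorithm; the SSA (scratch scoring average) of 54
# is an assumption for illustration.
def toy_round_rating(score, ssa=54.0, points_per_stroke=10.0):
    """Rate a round relative to a hypothetical 1000-rated SSA."""
    return 1000 + points_per_stroke * (ssa - score)

print(toy_round_rating(54))  # 1000.0 -> shooting the SSA rates 1000
print(toy_round_rating(62))  # 920.0  -> the 8-stroke gap above is ~80 points
```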
 
I think I understand now why some local pros say "C-tiers are rating killers."

This is a myth, plain and simple.

Typically folks don't travel far for C-tiers, so they know the course well... and so what I think they are really saying is something like, "I play this course a ton, and I did not score as well as I expected to while trying harder than usual."
 
In a closed system, ratings will drop slightly because rounds played a certain amount below a player's expected score are not part of the average used to compute ratings. E.g., 10 players with the same rating all shoot 50, except for one guy who has a really bad day and shoots 60. Only the 9 shooting 50 are used to compute the average. Their ratings don't go up, but the wish-I'd-stayed-home player's rating drops. So now the average rating of those 10 players has dropped.

But we aren't in a closed system; players enter the pool and drop out of it. As players drop out of organized play, assuming they are rated lower than the global average, the average rating of the entire pool is going to slide upwards.
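Here's a toy simulation of that closed-pool drift (my assumptions, not the published PDGA method: everyone starts at 950, the round rating follows the 10-points-per-stroke model, and a round far worse than the field average is excluded from the average):

```python
# Toy simulation of the closed-pool effect described above.
# For simplicity we treat the round rating as the new player rating;
# real ratings average many rounds.
import statistics

scores = [50] * 9 + [60]            # one player has a terrible day

mean = statistics.mean(scores)                          # 51.0
sd = statistics.stdev(scores)                           # ~3.16
included = [s for s in scores if s <= mean + 2.5 * sd]  # the 60 is dropped
base = statistics.mean(included)                        # 50.0

new_ratings = [950 + 10 * (base - s) for s in scores]
print(new_ratings)                   # nine 950s and one 850
print(statistics.mean(new_ratings))  # 940.0 -- the pool average fell
```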
 
So for a few years now, I've been asking about a simulated scenario similar to this:

MPO Division: 50 players all rated 1000.
REC Division: 50 players all rated 800.

Both play the same course, except REC plays one hole shorter. This way each division is rated on its own, which is the key to this scenario.

In both divisions, all players shoot the same score of 54. What will the round rating be for each division?

The public answer from those in the know has always been "this scenario will never happen due to the difference in skill sets." Well... what IF it actually happened?

I talked to a TD today who is going to work with me and try to get a clear answer, but it's not going to happen for a few weeks or so because he is out of town. (He's very curious about this as well. lol..) So I thought I would post this here and see if any other TD wants to plug in this scenario using their TD tools and post the results. :popcorn:
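For what it's worth, here's that exact scenario under a naive "ratings in = ratings out" toy model (my own sketch, not the actual TD tools; the real formula also accounts for scoring spread, so treat this as illustration only):

```python
# Naive "ratings in = ratings out" sketch: each division is rated on
# its own, and the field's average rating is pinned to the field's
# average score. NOT the actual PDGA/TD-tools formula.
def toy_round_ratings(prop_ratings, scores, pts_per_stroke=10):
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(scores) / len(scores)
    return [avg_rating + pts_per_stroke * (avg_score - s) for s in scores]

mpo = toy_round_ratings([1000] * 50, [54] * 50)
rec = toy_round_ratings([800] * 50, [54] * 50)
print(mpo[0], rec[0])   # 1000.0 800.0 -- same 54 rates 200 points apart
```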


Someone once told me (well... OK, more than once) to play the pro divisions because you will get higher-rated rounds. He also said "ratings in / ratings out." You don't agree with that guy?

But if the only differing factor was that one hole being 10 ft shorter, which causes the divisions to be rated separately, are you saying the 1000-rated player wouldn't have the higher-rated round?

You apparently have some mystery point you are trying to make. Just because "someone" told you something and you "talked" to an actual TD doesn't prove anything. Please listen to the podcast with Chuck; it really will help you understand ratings. When you learn something from Chuck, it will be real facts.

There are many other ratings threads here on DGCR started by players like yourself who want so badly to believe there is something "wrong" with the ratings system. The only thing every new ratings thread proves is that Chuck knows best! Because Chuck is THE expert on ratings.



But you can continue to argue about some imaginary situation, because "someone" told you (more than once)...
 
Paul shooting the best score won't cause his rating to go up. He could win and still see his rating go down.

OK... now this might help me understand things better. So to help clarify this, let's say Paul only plays DGPT events the entire year. He wins every one, and his rating still might not improve? If this is true, then it's definitely the missing piece of the puzzle in my understanding of ratings. Thx.
 
OK... now this might help me understand things better. So to help clarify this, let's say Paul only plays DGPT events the entire year. He wins every one, and his rating still might not improve? If this is true, then it's definitely the missing piece of the puzzle in my understanding of ratings. Thx.

It's not guaranteed that it would increase. In a real-world situation it seems very likely it would, but from what we're told about the system, there's nothing inherent in it that says it must increase.
 
OK... now this might help me understand things better. So to help clarify this, let's say Paul only plays DGPT events the entire year. He wins every one, and his rating still might not improve? If this is true, then it's definitely the missing piece of the puzzle in my understanding of ratings. Thx.

Imagine a pool of 10 MPO players, in every event.

Paul wins each one, with the others finishing behind him by 1, 2, 3, 4, 5, 6, 7, 8, and 9 strokes, respectively. In other words, he beats the field by 45 total strokes.

If he does that at the first tournament and at the last tournament, he hasn't improved relative to the field, so his rating hasn't changed. He's exactly the same amount better than the field at every tournament.

If at the last event everyone else finishes one stroke behind him, he's only beaten the field by 9 strokes. His rating will fall, even though he won.
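Running David's exact numbers through the same naive toy model used earlier in the thread (an illustration only, not the real formula):

```python
# DavidSauls' example in toy-model form: 10 players, all rated 1000.
def toy_round_ratings(prop_ratings, scores, pts_per_stroke=10):
    avg_rating = sum(prop_ratings) / len(prop_ratings)
    avg_score = sum(scores) / len(scores)
    return [avg_rating + pts_per_stroke * (avg_score - s) for s in scores]

field = [1000] * 10
blowout = [54] + [54 + i for i in range(1, 10)]   # Paul wins by 1..9 strokes
tight   = [54] + [55] * 9                          # Paul wins by 1 stroke each

print(toy_round_ratings(field, blowout)[0])  # 1045.0 -- big margin, big rating
print(toy_round_ratings(field, tight)[0])    # 1009.0 -- still won, lower rating
```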
 
Imagine a pool of 10 MPO players, in every event.

Paul wins each one, with the others finishing behind him by 1, 2, 3, 4, 5, 6, 7, 8, and 9 strokes, respectively. In other words, he beats the field by 45 total strokes.

If he does that at the first tournament and at the last tournament, he hasn't improved relative to the field, so his rating hasn't changed. He's exactly the same amount better than the field at every tournament.

If at the last event everyone else finishes one stroke behind him, he's only beaten the field by 9 strokes. His rating will fall, even though he won.

So the ratings of the field don't matter in this scenario? What if several of the players' ratings improved during this time span? The "ratings in" would be higher; wouldn't that influence round ratings? BTW, thank you for all of your replies. :thmbup:
 
So the ratings of the field don't matter in this scenario? What if several of the players' ratings improved during this time span? The "ratings in" would be higher; wouldn't that influence round ratings? BTW, thank you for all of your replies. :thmbup:

I believe DavidSauls is assuming the ratings of the whole field are constant (or at least vary too little to matter).

If several of the opponents' ratings all increased, the rating system is anticipating that their playing strength has increased. Therefore, if Paul keeps beating them by the same amount, his rating will increase too, because we anticipate that his skill has also increased, as evidenced by the results staying the same while much of the field improved.
 
But you can continue to argue...

Argue? lol... I'm not arguing anything, just discussing the rating system and trying to understand how it works. An argument would occur if I were trying to change something, which I am most certainly not.

I believe DavidSauls is assuming the ratings of the whole field are constant (or at least vary too little to matter).

If several of the opponents' ratings all increased, the rating system is anticipating that their playing strength has increased. Therefore, if Paul keeps beating them by the same amount, his rating will increase too, because we anticipate that his skill has also increased, as evidenced by the results staying the same while much of the field improved.

OK, that makes sense, thx. But that also goes back to my earlier comments about player ratings getting higher and higher, etc. So in the long run, is there a cap on how high a player's rating can get?
 
So the ratings of the field don't matter in this scenario? What if several of the players' ratings improved during this time span? The "ratings in" would be higher; wouldn't that influence round ratings? BTW, thank you for all of your replies. :thmbup:

I believe DavidSauls is assuming the ratings of the whole field are constant (or at least vary too little to matter).

If several of the opponents' ratings all increased, the rating system is anticipating that their playing strength has increased. Therefore, if Paul keeps beating them by the same amount, his rating will increase too, because we anticipate that his skill has also increased, as evidenced by the results staying the same while much of the field improved.

My fault---I'm assuming a pool of the same 10 players at every event, for demonstration purposes. I failed to state that.

If so, the average ratings wouldn't change. For every player whose rating increases, there'll be a corresponding decrease in another player's rating.

This is, of course, an unrealistic example, for demonstration of the math.
 
OK, that makes sense, thx. But that also goes back to my earlier comments about player ratings getting higher and higher, etc. So in the long run, is there a cap on how high a player's rating can get?

Why are you worried about player ratings getting higher and higher but not lower and lower?

In David's example, the player who comes in last with the 45-stroke difference will keep scoring rounds that have the lowest rating in the whole field, just as Paul maintains the highest-rated rounds in the field. When the field compresses and this last-place player scores the same as the 8 other players, they will get a much higher round rating than they would otherwise. The same effect plays out across the field, with players' round ratings rising or falling depending on where they finished in the field and by what margin.

It's been suggested, and I honestly don't know if it's true or not, that whatever rating a player brings into a round is essentially apportioned back out according to the scores in that round. This would make it what's called a zero-sum game, i.e., anything I gain has to equal, in sum, what everyone else lost. In that situation a "perfect" player could keep moving between groups of players, scoring by far the best rounds, and accumulating rating points, and the only constraint on their rating would be the width of the pyramid of players below them. Then again, given the constraint of possible scores on an 18-hole round (is a 27 still allowed?), there will be a practical limit on the highest possible rating, but what that is I wouldn't know.
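That zero-sum reading is easy to check in the toy model used above (an assumption about how it might work, not a confirmed property of the real system): if round ratings are set relative to the field's average score, the gains and losses across the field cancel exactly.

```python
# Zero-sum check under the toy model: deviations from the field's
# average score sum to zero, so rating points gained = points lost.
scores = [54, 55, 56, 57, 58, 59, 60, 61, 62, 63]
avg = sum(scores) / len(scores)                 # 58.5
deltas = [10 * (avg - s) for s in scores]       # per-player rating swing
print(deltas)        # +45.0, +35.0, ..., -45.0
print(sum(deltas))   # 0.0 -- a closed, zero-sum redistribution
```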
 
So the ratings of the field don't matter in this scenario? What if several of the players' ratings improved during this time span? The "ratings in" would be higher; wouldn't that influence round ratings? BTW, thank you for all of your replies. :thmbup:
Argue? lol... I'm not arguing anything, just discussing the rating system and trying to understand how it works. An argument would occur if I were trying to change something, which I am most certainly not.



OK, that makes sense, thx. But that also goes back to my earlier comments about player ratings getting higher and higher, etc. So in the long run, is there a cap on how high a player's rating can get?
This is all why I keep posting the following: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." - Goodhart's Law.
My fault---I'm assuming a pool of the same 10 players at every event, for demonstration purposes. I failed to state that.

If so, the average ratings wouldn't change. For every player whose rating increases, there'll be a corresponding decrease in another player's rating.

This is, of course, an unrealistic example, for demonstration of the math.
What David is getting at here is that your examples rely entirely on scenarios that are completely outside of what the rating system needs to do. Outside of the years of its initial seeding, the rating system does not exist in a limited population. Players' ratings are rising and falling, and in every instance they are run through the filter that is the course itself; how they're distributed by that filter is determined by their characteristics coming in. The system doesn't need to accommodate 10 800-rated players scoring the same as 10 1000-rated players, and Chuck has described earlier in this thread some of the boundary parameters used (recommended, rather, so some of those situations may still occur, though the touring pros don't deal with them) to avoid the system having to handle fluke situations, such as layouts that are extremely easy.

Right now Paul McBeth has a bigger skill gap between himself and a 1000-rated player than any player in history. The gap between Paul McBeth and a 1000-rated player is almost 2 full strokes greater than the gap between Ken Climo and a 1000-rated player at any point in his career. The ratings, as far as I can tell, are meaningful for discussing the historic skill of players across the past 2+ decades. And although some communities can experience rating lag, interaction with other communities quickly cuts into that.
 
The most important part missing from these hypothetical discussions is that the course is more likely to produce odd outcomes than the players. The courses played sit in the middle between the prop ratings and the round ratings produced, unlike chess, where the "course" (the board) is always the same and drops out of the equation, so it's truly ratings in = ratings out. These discussions assume that all courses will produce the same scoring distribution, and they don't.

The "perfect" course statistically would produce the same scores and resulting player finish position every time it's played by the same players with the same ratings. We know that both courses and players are statistically dynamic and it just won't happen. In fact that's why lower rated players are willing to enter Open in leagues because they have a shot to cash or even win in a single round event.

A "good" real course produces scores close to the same range as would be expected by the ratings of the props playing that course. In addition, we would expect the final scores to have a good correlation with the player ratings of the props. If you see a round where the scores are completely inverted, i.e., the highest player shot the worst score and the lowest rated player shot the best score, the culprit is much more likely something about the course being different from a "normal" disc golf course that produced the ratings.

In the case where there's close to zero correlation between scores and prop ratings, the course is essentially producing perfectly random results, i.e., it's either the luckiest course in history or one that's not testing or rewarding the skills that produced the prop ratings in the first place.

So correlation is one parameter that matters for courses. The other is scoring spread, or range, as mentioned above. If the output scoring range is narrower than the hypothetical input range, the best score will be rated lower than the rating of the highest-rated prop in the field. If the scoring spread is wider than predicted (which it usually is), the best score will get a round rating higher than the highest-rated prop.

And here's why we have seen what appears to be ratings inflation at the top: courses with lots of OB penalties typically produce a wider range of scores than the predicted scoring width of the prop ratings. Some courses, like Eureka, GBO, and USDGC, generate at least 6 penalties per player per round on average. This "penalty stroke padding" usually spreads the scores even wider. However, if the number of penalties per player doesn't correlate well with prop ratings, then the round ratings are more random. Note that long wooded courses typically do not produce as wide a scoring range as OB-laced courses, and their correlation with skill/rating can be weaker.

Point being that both the type of course and how it's designed play a major role in how players perceive ratings, and also in relation to par, which is a whole 'nother topic. The new frontier for the sport will be breaking out ratings by course type to better see these effects and to decide how they should be used to improve the elite tours. If spectator engagement is the main goal, we want to see tighter races and different leaders, so courses with a narrower scoring spread and less correlation would be desired. However, players who want to take advantage of their distance will prefer courses with penalty padding and a wider scoring range that correlates well with skill/ratings.

Since these goals are at odds and both types of courses will be played on tour, the key to tour fairness would be making sure that, to be in contention for tour awards, a player has to play a minimum number of events on each of the two or three identified course types, so they can't completely dodge their least favorite type of course and still win the tour.
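To make Chuck's two course parameters concrete, here's how you might measure them for a single round (the field and scores are made up, and the 10-points-per-stroke spread prediction is the usual rule of thumb, not the exact formula):

```python
# Measuring the two course parameters described above, for a made-up
# field: (1) score/rating correlation, (2) actual vs predicted spread.
import numpy as np

prop_ratings = np.array([1020, 1000, 985, 970, 955, 940, 925, 910])
scores       = np.array([  52,   54,  55,  57,  58,  60,  61,  64])

r = np.corrcoef(prop_ratings, scores)[0, 1]
print(f"score/rating correlation: {r:.2f}")  # near -1.0: skill is rewarded

predicted_spread = (prop_ratings.max() - prop_ratings.min()) / 10  # 11 strokes
actual_spread = scores.max() - scores.min()                        # 12 strokes
print(predicted_spread, actual_spread)  # wider actual spread -> the best
# round rates above the highest-rated prop, as the text explains
```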
 
Two things stand out to me from your post, Chuck. First, if we had a goal of increasing the range of ratings, we should be playing courses with a higher (true) par; by doing so we should see a greater spread of scores. Or playing courses with more holes. Basically, anything that increases the number of shots a typical player takes to complete the round. As you say, adding more "lucky" penalty elements adds random score separation, which doesn't necessarily help as much as simply having to cover more ground or more holes.
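One caveat on the "more holes" idea (my own back-of-envelope statistics, not from Chuck's post): if per-hole scores are roughly independent, the spread grows with the square root of the number of holes, not linearly.

```python
# Back-of-envelope: with roughly independent per-hole scores, a round's
# standard deviation grows like sqrt(number of holes). The per-hole SD
# here (0.5 strokes) is an assumption for illustration.
import math

for holes in (18, 27, 36):
    print(holes, round(0.5 * math.sqrt(holes), 2))
# 18 -> 2.12, 27 -> 2.6, 36 -> 3.0: more holes widen the spread, but slowly
```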

The other thing I notice is that you describe "perfect" and "best" courses as returning well-correlated score/rating pairs. These obviously represent some kind of average tournament course, since they align most closely with the ratings that courses generate from play on them. Rather than denigrate those that fall outside of that average (I assume your quotes were intended to allay any suggestion of them being truly better quality), maybe we need to embrace them to evolve the sport a little. Moving to a preferred set of course styles would involve some pain in terms of more chaotic scoring/ratings, but it would lead to more exciting finishes, I would suggest. Meh, I'm just repeating you here, sorry.

Would it be preferable to have a variety of holes on one course, or a variety of courses with similar holes across a tour?

I do wonder how correlated a course that rewarded lots of rollers would be.
 
I'm really saying a few things. Course type matters (but we don't yet know how much). The influx of highly punitive OB courses presents a "different" game experience, such that a separate but parallel rating process is needed, where the ratings of the props were produced from playing the same types of OB courses they are now rating. Whether these courses are better or worse for the sport depends on the goals of our administrators and players. Regardless, the course stats should be separated to allow better analysis and clearer decisions affecting course design for the future of the sport, which will likely diverge from day-to-day play.
 
Right now Paul McBeth has a bigger skill gap between himself and a 1000-rated player than any player in history. The gap between Paul McBeth and a 1000-rated player is almost 2 full strokes greater than the gap between Ken Climo and a 1000-rated player at any point in his career.

Off topic:

So this proves the competition during the Climo era was better than the competition during the McBeth era. :) :popcorn:
 
