
Par Talk

Which of these best describes Hole 18 at the Utah Open?

  • A par 5 where 37% of throws are hero throws, and 21% are double heroes.

I'm sure if I leaf through the 359 pages, that pie chart would mean something to me. Which courses does this represent? Was this tournament play? And all that shows is actual scores; it doesn't tell me if the "experts" in the field made any "errors". Nor does it tell me how an error is defined.

Oh, boy, this is where it's obvious you're new to the thread. Lessee if I can provide some salient points:

  • The definition of par that Steve uses is the PDGA definition
  • Steve offers a player rating of 1000 as expert under the definition
  • When he reports scores, he's using results from tournaments
  • He also tries to use scores from experts--not from those rated significantly lower or higher

There are sub-threads of discussion on such things as errors and whether we can identify them and so forth. There's a whole lot of interesting information in the thread...and a whole lot of fresh, bovine excrement.
 
Issues with defining par based on this definition:

- Part of that definition is "as determined by the director", not "as determined by math".
- Who is an expert? And if you're basing who an expert is on their rating, then just use the PDGA's rating-based standards.

This definition is EXTREMELY subjective, and therefore a terrible starting point for determining a par standard. Especially when the PDGA already has a much more objective standard: https://www.pdga.com/files/par_guidelines_may_2017.pdf

That definition was the starting point for the PDGA Guidelines. More precisely, I derived a formula from that definition and the guidelines are largely based on the results of that formula being applied to a database of scoring data from hundreds of holes.
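
To make that concrete, here is a minimal sketch of the kind of calculation involved, assuming the interpretation discussed later in the thread (expert = roughly 1000-rated, par = the score such a player is expected to make on the hole). The rating window, data layout, and function name are illustrative assumptions, not the actual formula behind the guidelines.

```python
from collections import Counter

def estimate_par(hole_scores, min_rating=985, max_rating=1015):
    """Illustrative par estimate for one hole.

    hole_scores: list of (player_rating, score) pairs from tournament rounds.
    Keeps only players near 1000-rated and returns the score they most
    commonly shoot -- one simple reading of "expected score of an expert".
    """
    expert_scores = [s for r, s in hole_scores if min_rating <= r <= max_rating]
    if not expert_scores:
        return None  # not enough expert data on this hole
    most_common_score, _count = Counter(expert_scores).most_common(1)[0]
    return most_common_score

# Made-up results from players near 1000-rated on a hypothetical hole
sample = [(1002, 3), (998, 3), (1011, 2), (990, 3), (1005, 4),
          (1000, 3), (995, 3), (1008, 2), (992, 3), (1013, 3)]
print(estimate_par(sample))  # -> 3
```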

Note that the guidelines do not actually say an "expert" is a Gold-level player. I think interpreting "expert" as a 1000-rated player is a no-brainer, but TDs are still free to interpret it however they want.

- What constitutes an error? If you hit a tree and get kicked next to the basket, is that an error? If you chain out an ace and the disc then rolls out of bounds, is that an error?

I would call those errors.

The math is based on the idea that a large majority of all throws by experts are errorless and expected, without needing to identify which players actually performed errorlessly.

Also want to add (as I'm sure others have) that deciding whether a hole plays closer to a par 3 or par 4 is completely and totally pointless. It doesn't make the hole good or bad, and it doesn't make it more or less legitimate. From a statistical standpoint, the only things that matter in separating good holes from bad holes are (and this is just my opinion):

1) Having a decent scoring spread, and most importantly
2) having a high correlation between a player's score on that hole and that player's rating.

Those are the two best statistics for determining the quality of a hole as a fair and effective test of skill.
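
As a rough illustration of those two statistics, here is a hedged sketch of how they could be computed from per-hole results. The data layout is made up, and using standard deviation for "scoring spread" is my own simplification; the point is only that both numbers fall out of ordinary tournament data.

```python
from statistics import pstdev, correlation  # correlation needs Python 3.10+

def hole_quality_stats(results):
    """results: list of (player_rating, hole_score) pairs for one hole.

    Returns (scoring_spread, rating_score_correlation). Spread here is just
    the standard deviation of scores; the correlation should come out
    negative on a good hole, since higher-rated players should score lower.
    """
    ratings = [r for r, _ in results]
    scores = [s for _, s in results]
    return pstdev(scores), correlation(ratings, scores)

# Made-up data: higher-rated players tend to score lower on this hole
results = [(1020, 2), (1015, 2), (1010, 3), (1005, 3), (1000, 3),
           (995, 3), (990, 4), (985, 3), (975, 4), (960, 4)]
print(hole_quality_stats(results))
```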

I agree par is totally useless as a measure of the quality of a hole. However, par is not pointless. It has other jobs to do. It does those jobs better when it is set better.
 
Oh, boy, this is where it's obvious you're new to the thread. Lessee if I can provide some salient points:

He also tries to use scores from experts--not from those rated significantly lower or higher

Yes sir, new to the thread, but not to the game. Thanks for the info. It is good to know what definitions are being used. I assumed an expert would be one at the highest levels.

It seems to me looking at actual scores from a tournament is not helpful in determining par - that is more helpful in determining player rating. Again we don't know what the playing conditions were, etc.

I can see how this conversation is a giant rabbit hole, and may be more relevant for people who are designing courses.
 
Yes sir, new to the thread, but not to the game. Thanks for the info. It is good to know what definitions are being used. I assumed an expert would be one at the highest levels.

Not the very highest levels. A par that was set for the highest levels would not be as useful as a par that applies to a rating where there are more players. One of the reasons to pick 1000-rated is because that's about the highest rating where there are a significant number of players at and above that rating.

The following graph shows the ratings of the players in the tournaments for which I have scores. These are a select group from all tournaments - being well-run enough to use PDGA Live Scoring or Udisc or Metrix. So, these events are high-rating-heavy compared to the average tournament.

[Attached graph: RoDbR.png, ratings of players in the sampled tournaments]


It seems to me looking at actual scores from a tournament is not helpful in determining par - that is more helpful in determining player rating. Again we don't know what the playing conditions were, etc.

Par is what happens under ordinary conditions. Ordinary conditions are generally the most common, so it isn't hard to get ordinary data. Picking the most expected score is not a sensitive exercise. A wide range of conditions will usually generate the same expected score.

Of course, a round should not be used if it was played under extraordinary conditions. A TD adjusting pars for next year would know what the conditions were. For the tournaments I've looked at, there is enough coverage to know when conditions were not ordinary.
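
Here is a minimal sketch of that screening idea, with made-up scores and illustrative function names: if the most common expert score in one round drifts away from the most common score across all rounds, that round is a candidate for "extraordinary conditions" and can be set aside.

```python
from collections import Counter

def modal_score(scores):
    """Most common score in a list of hole scores."""
    return Counter(scores).most_common(1)[0][0]

def flag_unusual_rounds(rounds):
    """rounds: dict mapping a round label to the expert scores on one hole.

    Returns labels of rounds whose most common score differs from the most
    common score across all rounds combined.
    """
    all_scores = [s for scores in rounds.values() for s in scores]
    overall = modal_score(all_scores)
    return [label for label, scores in rounds.items()
            if modal_score(scores) != overall]

# Two calm rounds agree on a 3; a windstorm round shifts the mode to 4
rounds = {
    "round 1": [3, 3, 2, 3, 4, 3, 3],
    "round 2": [3, 2, 3, 3, 3, 4, 3],
    "round 3 (storm)": [4, 5, 4, 4, 3, 5, 4],
}
print(flag_unusual_rounds(rounds))  # -> ['round 3 (storm)']
```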

I can see how this conversation is a giant rabbit hole, and may be more relevant for people who are designing courses.

The big takeaway I'd like to get across is that TDs can and should set pars that are more appropriate for the Open division. The two biggest mistakes are: using tee sign pars that were set for a different skill level, and adding two throws to the number of drives.

There has been noticeable improvement in pars since this thread was started. It must be working.
 

If I read PM's posts correctly, the method (distance or shots to green plus two) is the de facto definition, and the actual definition is ignored. Not sure why they have the latter, but that's how I read it.

The golf talk strikes me as a diversion. Disc golf is a different sport; just because we're using the same terminology doesn't mean we have to use it in exactly the same way. We can follow the definition, if we want.

Exactly.

And I'll add a little to that second part. Putting being so easy and the lack of tee boxes is what makes this so controversial vs. golf. You never see this argument in golf because par works so well.

Steve would have us basically never making a birdie with a putt.
 
Steve would have us basically never making a birdie with a putt.

Except where there is a drive and/or approach that is better than expected.

Or where the expected drive and/or approach leave the player with a putt that they are not expected to make, but still might. For example, in the range where experts make them 30% of the time.
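
A quick worked number for that 30% case, under the simplifying assumption that any miss from that range is cleaned up on the next throw:

```python
# Hypothetical 30% make rate from this range; assume any miss is holed out next throw.
p_make = 0.30
expected_throws = p_make * 1 + (1 - p_make) * 2
print(expected_throws)  # 1.7 -> par treats the position as two more throws,
                        # yet roughly three times in ten the putt drops for a birdie
```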
 
And I'll add a little to that second part. Putting being so easy and the lack of tee boxes is what makes this so controversial vs. golf. You never see this argument in golf because par works so well.

Indeed. Even if golf is ignoring its own definition (the expected score of an expert), the formula seems to produce results that agree with it, for the great majority of holes.
 
Except where there is a drive and/or approach that is better than expected.

Or where the expected drive and/or approach leave the player with a putt that they are not expected to make, but still might. For example, in the range where experts make them 30% of the time.

Better than error-less? No, I really don't want to get into that lol
 
Oh, boy, this is where it's obvious you're new to the thread. Lessee if I can provide some salient points:

The definition of par that Steve uses is the PDGA definition

Steve doesn't actually use the PDGA definition. Instead, he is offering a new definition using the parts of the PDGA definition that he likes and ignoring the parts that he doesn't like and then interpreting it with a statistical method designed to reduce the number of birdies.

Steve offers a player rating of 1000 as expert under the definition

Oh, so you already realize that he's not using the PDGA definition. Weird.
 
Steve doesn't actually use the PDGA definition. Instead, he is offering a new definition using the parts of the PDGA definition that he likes and ignoring the parts that he doesn't like and then interpreting it with a statistical method designed to reduce the number of birdies.
...
Oh, so you already realize that he's not using the PDGA definition. Weird.


Before any method can be developed, the terms Expected, Errorless, and Expert all need to be given meaning. That does not mean the method does not follow the definition. It means the method follows the definition by selecting one of many possible interpretations.

For example, I doubt there is much disagreement with the idea that "1000-rated player" is one valid interpretation of what "expert" means. That does not mean the term "expert" is being ignored.

I've selected among interpretations by looking at the practical results. Reducing the number of birdies was never a goal. The goal was to make par work better. One example is that getting a birdie should actually mean that the player gained against the competition while a bogey should mean the player lost ground.

The only birdies that are going away are the ones that players got because par was too high. Those were on the holes where players expected a birdie.

No one screams louder or more incoherently than someone who fears being treated fairly after having gotten something for nothing.
 
Before any method can be developed, the terms Expected, Errorless, and Expert all need to be given meaning. That does not mean the method does not follow the definition. It means the method follows the definition by selecting one of many possible interpretations.

For example, I doubt there is much disagreement with the idea that "1000-rated player" is one valid interpretation of what "expert" means. That does not mean the term "expert" is being ignored.

1000-rated is certainly a valid interpretation of what "expert" means. But, mathematically, it's just an arbitrary point on a number line, even if it is nice and round.

Let's say I preferred to use players rated 936 as my "expert". At that rating, they are probably better than 99.9% of the population at disc golf. Wouldn't that make them an expert?

Or maybe I want my benchmark to be the best player in the world, and he alone (after all, that is the one person who is indisputably an expert). Wouldn't that be less subjective and filter out ALL of the fraudulent birdies?

The point is even if the method is mathematically sound, it is still subjective and ultimately (in my opinion, of course) unnecessary with the PDGA having their own guidelines.
 
I'm not sure I can wrap my head around the concept of a 1000-rated player being considered an "expert". I am roughly a 945 rated player, meaning on average I am 5.5 strokes behind a 1000-rated player each round. I am a decent player but I'm definitely not 5.5 strokes away from being an expert by any means.
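
For anyone wondering where the 5.5 comes from, it follows from the common rule of thumb of roughly 10 rating points per throw; that conversion is approximate and varies with the course, so treat it as an assumption rather than an exact PDGA figure.

```python
# Rough rule of thumb (varies by course): about 10 rating points per throw per round.
my_rating, expert_rating = 945, 1000
points_per_throw = 10  # assumed conversion, not an exact PDGA figure
print((expert_rating - my_rating) / points_per_throw)  # -> 5.5 throws per round
```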
 
1000-rated is certainly a valid interpretation of what "expert" means. But, mathematically, it's just an arbitrary point on a number line, even if it is nice and round.

Nice and round is not insignificant; it makes it more likely to be remembered and adopted by everyone.

Also, when ratings were invented, the skill level associated with 1000 was not picked arbitrarily. It had meaning. If I recall, it was where last cash would be in the biggest events. As it turns out, basing par on the skill level that is last cash produces pars that are as close as possible to the scores of the group of cash winners. Which maximizes the information provided by par. So it's got that going for it. Which is nice.

Let's say I preferred to use players rated 936 as my "expert". At that rating, they are probably better than 99.9% of the population at disc golf. Wouldn't that make them an expert?

As a TD you could do that.

It wouldn't be a rating that most TDs would agree with for the Open division (where the minimum suggested rating is 970). Agreement on a particular skill level is needed to make par comparable across all events. I'd bet 1000 is more widely remembered and accepted than - what was your number?

Or maybe I want my benchmark to be the best player in the world, and he alone (after all, that is the one person who is indisputably an expert). Wouldn't that be less subjective and filter out ALL of the fraudulent birdies?

Actually, choosing the best player in the world has some things going for it. Even par would almost always win or almost win. That's the expert to pick if the goal is to eliminate all birdies. That's not my goal.

One thing I don't like is that it makes most players' scores farther from par (this time, in the over direction). So, instead of the situation in the past where it was a mystery how many "birdies" it would take to cash, we would swing the other way to guessing how many "bogeys" last cash can survive.

At 1000-rated, the expected answer to both is always near zero. The closer more contenders' scores are to even par, the more useful par becomes. Less arithmetic, basically.

Also, it would be very hard to collect scoring data. Many tournaments do not have the best player in the world there. But there are usually enough players near 1000 rating to figure out the scoring distribution.

The point is even if the method is mathematically sound, it is still subjective and ultimately (in my opinion, of course) unnecessary with the PDGA having their own guidelines.

You know the Gold par guidelines are based on my method for a 1000-rated player, right? The trouble with those guidelines is that scores are not very tightly tied to distance. Using the guidelines is just fine, but they're not as accurate as actually looking at scores.
 
I'm not sure I can wrap my head around the concept of a 1000-rated player being considered an "expert". I am roughly a 945 rated player, meaning on average I am 5.5 strokes behind a 1000-rated player each round. I am a decent player but I'm definitely not 5.5 strokes away from being an expert by any means.

Interesting point.

Any 1000-rated players out there want to weigh in on whether they feel like an expert?
 
Interesting point.

Any 1000-rated players out there want to weigh in on whether they feel like an expert?

lol, perhaps I don't give myself enough credit (doubtful) but it definitely seems like the very good players in our game are still fairly close to the average players, more so than in (ball) golf. I don't have a huge arm, but I can still reach most greens in par minus two strokes, whereas in (ball) golf I was much closer to a bogey-plus player even though I could outdrive most people I played with. And in (ball) golf I averaged far fewer than 2 putts per green, but only because I was not on the green with my approach shot and wound up chipping close to the pin on many holes.
 
Also, when ratings were invented, the skill level associated with 1000 was not picked arbitrarily. It had meaning. If I recall, it was where last cash would be in the biggest events.


Interesting (I didn't know that). That definitely makes a difference.
 