Thanks. Looks like 13 out of 67 who finished were 1000 or above.
Were you wondering whether the results were reliable?
I got the equivalent of 62.84 rounds of data out of that field.
No, not questioning the reliability. I was wondering if the field was a good representation of "1000-rated players." I'm not a big stats guy, so this may be a bit remedial.
I trust that your data is reliable and valid for a 1000-rated player. Would you just use a straight average of the individual player ratings to get the average rating for the field? If so, I was wondering whether the par set for the course was closer to reality based on the actual players, vs. a theoretical group of 1000-rated players.
Hopefully that makes sense. And I do enjoy perusing your data. :thmbup:
I don't care what the average rating for the field is. The bottom part of any field shouldn't really even be playing in Open, so who cares how they play? It irks me when a random assortment of no-chance donators is included in comments like "this hole is averaging over par."
I apply weights to hole scores, but I don't compute an average score. I apply the weights to each score independently: not to what the player scored, but to whether that player got each possible score. For example, when looking at the frequency of 2s, I count one if the player got a 2 and zero if they did not.
One of the constraints on the weights is that, when they are applied to the ratings of the players, they always generate a weighted average rating of exactly 1000. The other is that ratings far from 1000 get lower weights. The most weight a player can get is 1 per round. The sum of these weights for all players over all rounds is the effective number of rounds.
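The constraints above (weights that shrink away from 1000, a cap of 1 per round, and a weighted mean rating of exactly 1000) can be sketched in code. This is only my guess at one way to satisfy them; the Gaussian kernel, the `scale` value, and the bisection step are my assumptions, not necessarily the actual method:

```python
import math

def player_weights(ratings, scale=50.0, target=1000.0):
    """Per-round weights that peak at a kernel center c, never exceed 1,
    and make the weighted mean rating equal `target`. The kernel shape
    and the bisection search are illustrative guesses."""
    def weights(c):
        # exp(0) = 1 at the center, so the 1-per-round cap holds.
        return [math.exp(-((r - c) / scale) ** 2) for r in ratings]

    def weighted_mean(c):
        w = weights(c)
        return sum(wi * r for wi, r in zip(w, ratings)) / sum(w)

    # Bisect on the kernel center until the weighted mean hits target.
    lo, hi = min(ratings) - 200.0, max(ratings) + 200.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if weighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return weights((lo + hi) / 2.0)

# Hypothetical field, one round per player:
ratings = [960, 985, 990, 1002, 1010, 1025]
w = player_weights(ratings)
effective_rounds = sum(w)  # the "effective number of rounds"
```

With this hypothetical field, the effective number of rounds comes out well below 6, since players far from 1000 contribute less than a full round.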
So, that gives me (in the example) the frequency of a theoretical 1000 rated player getting a 2. It is based on the actual play of the actual players. These frequencies go into the par calculation.
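The indicator-counting step might look like this in code (the scores and weights below are hypothetical numbers; the weights would come from the constrained weighting described above):

```python
def weighted_score_frequency(scores, weights, value):
    """Weighted frequency of one hole score: count 1 if a player got
    `value`, 0 if not, scaling each count by that player's weight."""
    hits = sum(w for s, w in zip(scores, weights) if s == value)
    return hits / sum(weights)

# Hypothetical hole scores and player weights for one round:
scores = [2, 3, 3, 2, 4, 3]
weights = [0.9, 1.0, 0.8, 0.6, 0.4, 0.3]
freq_of_2 = weighted_score_frequency(scores, weights, 2)
```

Because each player contributes an indicator for exactly one score value per hole, these weighted frequencies over all observed scores sum to 1.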
I can generate pars even if there are no exactly-1000-rated players. If there are players fairly near 1000, both above and below, the data is still reliable. It's no stretch to say that a 1000-rated player will get a 2 with a frequency between that of a 1010-rated player and a 990-rated player.
If all the players are on one side of 1000 or the other, I calculate par based on an extrapolation. Better than nothing, but it might not rise to the level of reliable if no players are near 1000.
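One simple way to interpolate or extrapolate a score frequency to a 1000 rating is a least-squares line of the score frequency against rating, evaluated at 1000. This is only my illustration of the idea; the actual fit used may differ:

```python
def estimate_frequency_at_1000(ratings, freqs):
    """Least-squares line of frequency vs. rating, evaluated at 1000.
    Interpolates when the ratings straddle 1000; extrapolates when
    they are all on one side (less reliable, as noted above)."""
    n = len(ratings)
    mean_r = sum(ratings) / n
    mean_f = sum(freqs) / n
    sxx = sum((r - mean_r) ** 2 for r in ratings)
    sxy = sum((r - mean_r) * (f - mean_f) for r, f in zip(ratings, freqs))
    slope = sxy / sxx
    return mean_f + slope * (1000.0 - mean_r)

# Hypothetical data with all players below 1000: pure extrapolation.
est = estimate_frequency_at_1000([950, 960, 970, 980],
                                 [0.05, 0.06, 0.07, 0.08])
```

The further the whole field sits from 1000, the more the answer leans on the fitted slope rather than observed play, which is why extrapolated pars are shakier.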
Hey Steve, can you do a comparison of Blue Lake 2014 with the Portland Open data?
Sure. If you supply the data.
Don't you have access to the PDGA live scores?
You mean Pro Worlds? I can do that.
I don't think any major tournaments (e.g. significant touring pro attendance) have been played at Blue Lake since 2014 Worlds.
I hear several holes have been altered; I wonder if they're in the group of {3,4,5,6,9,13,14,18} (big group, btw -- I'm kind of surprised given that Feldberg is credited with the design).
It will be interesting to compare the scoring five years later.
Thanks!
Way back then, some people thought you could raise par and make a course better by adding penalties all over the place. We'll see if that's still the case.
^...and the ones who didn't.
Did you happen to save the tee times?