
Top Player Ratings Over Time

This is not actually correct. Ratings are relative to what the current field has shot over the past year or the past "x" number of rounds.
I never said they didn't include those "current" rounds, as the term "current rating" would imply; my point is that you are trying to compare across different decades.
 
If my belief is correct, then the slow process of a sinking collective rating would manifest itself to a greater degree over a longer period of time (butterfly effect and all that jazz). I just threw out two different dates that were sufficiently far apart to make the point.
 
There are two aspects to consider for long-term implications. First, we do drop about 2% of the lowest-rated rounds, which slightly boosts ratings over time. Second, we do not know whether the average skill level of the newer players since 2010 is the same as, higher than, or lower than that of the pool of players with ratings prior to 2010. One might speculate that the influx of new players will average lower skill levels once they become propagators, since they are later adopters than the group before 2010. So it's not easy to tease out these multiple elements to determine whether the overall ratings have changed.
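For anyone curious how much dropping the bottom slice can move an average, here's a toy Python sketch. It assumes a rating is just the mean of the included round ratings and that rounds are roughly normally distributed, which is a simplification of whatever the PDGA actually computes; all the numbers are made up.

```python
import random

random.seed(1)

def rating(rounds, drop_frac=0.02):
    """Mean of round ratings after dropping the lowest drop_frac of them."""
    kept = sorted(rounds)[int(len(rounds) * drop_frac):]
    return sum(kept) / len(kept)

# 200 rounds drawn around a "true" skill of 950 with SD 25 (invented numbers).
rounds = [random.gauss(950, 25) for _ in range(200)]

print(f"plain mean:         {sum(rounds) / len(rounds):7.2f}")
print(f"bottom 2% dropped:  {rating(rounds):7.2f}")  # slightly higher
```

Even this simplified version shows a small upward nudge, and since propagators' ratings feed back into future round ratings, that nudge could compound across update cycles.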

The one thing you can look at is whether the SSA for the same events over time goes up or down. When I looked at this around 2008, the SSAs were very stable at courses like Knob Hill that did not change over that time period. However, even if the tees and pins were the same, the foliage had grown and changed over that time period.
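If anyone wants to eyeball this themselves, a quick least-squares slope of SSA against year does the job. The SSA values below are made up for illustration; they are not actual Knob Hill data.

```python
# Regress SSA for the same event/layout against year; a near-zero slope
# means scoring difficulty has been stable. SSA values are hypothetical.
years = [2002, 2003, 2004, 2005, 2006, 2007, 2008]
ssa   = [54.8, 55.1, 54.6, 54.9, 55.0, 54.7, 54.9]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(ssa) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, ssa))
         / sum((x - mean_x) ** 2 for x in years))

print(f"SSA trend: {slope:+.3f} strokes per year")  # ~0 => stable
```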
 
We do know technology changed. Typically heavily wooded courses beat-in over time and open up new routes unless they are neglected or have new foliage added.

I found it interesting that the Memorial used to have a much higher SSA when it began. Obviously a major change occurred with the course(s).
 
Actually the Fountain and Vista have had very stable SSAs over the past five years or so when the layouts have remained pretty much the same. That's even accounting for the few abnormal windy rounds. There's no indication from some of our course designer data that new disc technology has improved scores on several courses that have been analyzed except maybe on par 4 and 5 holes. Even though players are throwing farther, they may not be more accurate on par 3 holes they can reach.
 
I was looking at the Memorial from the early 2000s. Top scores were around 70 per round.
http://www.pdga.com/tour/event/2829
 
Layouts were quite a bit different then. I can't remember when they switched each course to the current layouts, but it was less than 10 years ago.
 
There are two aspects to consider for long-term implications. First, we do drop about 2% of the lowest-rated rounds, which slightly boosts ratings over time.

I always found this aspect interesting, and it definitely accounts for at least part of the reason why ratings have kept creeping up over time.

My question is, in a very basic manner: when dealing with statistical groups like this, why is the lowest rating, which is x deviations below the mean (x being whatever number the PDGA uses), thrown out, while ratings that are x deviations above the mean are still used? Statistically it makes no sense, as both results are outliers by the PDGA's own definition, and in any other statistical trending analysis both would be thrown out or checked with an outlier Q-test (where applicable).

Summary: If the bottom 2% of rounds aren't used in ratings calculations, then why are the top 2% of ratings included in that calculation?
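For reference, here's roughly what the Q-test mentioned above looks like when applied to both ends of a small set of round ratings. This is just an illustration of the standard Dixon Q-test at 95% confidence, not anything the PDGA actually runs, and the round ratings are invented.

```python
# Dixon Q-test applied to BOTH ends of a small sample. The critical
# values are the standard 95%-confidence table for n = 3..10.
Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568,
             8: 0.526, 9: 0.493, 10: 0.466}

def q_test(values):
    """Return (low_is_outlier, high_is_outlier) at 95% confidence."""
    v = sorted(values)
    value_range = v[-1] - v[0]
    q_low = (v[1] - v[0]) / value_range      # gap at the low end
    q_high = (v[-1] - v[-2]) / value_range   # gap at the high end
    q_crit = Q_CRIT_95[len(v)]
    return q_low > q_crit, q_high > q_crit

# Eight round ratings with one suspiciously hot round at the top:
rounds = [902, 915, 908, 911, 906, 913, 909, 987]
print(q_test(rounds))  # (False, True): only the hot round is an outlier
```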
 
The study I did on ratings decline with age for established pros showed no loss until age 40, then about 10 points every five years until age 60, then 20 points per five years to age 70. There was not enough data after that.
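That schedule is easy to encode directly. A minimal sketch, assuming the decline is linear within each age bracket (the study above only gives five-year averages, so this is my interpolation):

```python
# Decline schedule from the study described above (an approximation of
# its 5-year averages, not an official PDGA adjustment): no loss to age
# 40, ~10 points per 5 years from 40 to 60, ~20 points per 5 years from
# 60 to 70.
def expected_rating_loss(age):
    """Average rating points lost relative to a player's pre-40 level."""
    loss = 0.0
    loss += max(0, min(age, 60) - 40) * (10 / 5)  # 2 pts/year, ages 40-60
    loss += max(0, min(age, 70) - 60) * (20 / 5)  # 4 pts/year, ages 60-70
    return loss

for age in (35, 45, 60, 70):
    print(f"age {age}: about {expected_rating_loss(age):.0f} points lost")
```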

So I got a good 7 years before my rating drops 10 pts on average? Excellent.
 
The two ends of the statistical range are not equal: you can manipulate your score at the lower end of the range but not at the higher end. Thus, dropping the lowest 2% prevents manipulation at that end of the range.
 

I completely understand why you'd throw out the lower rounds, but I think that keeping the higher rounds is silly.

At the top levels it won't matter as much because those players are more consistent; their round ratings are less likely to fall outside the allowable standard-deviation range on the high end (on either end, really, when talking about the top 100). For amateur players that doesn't work as well, since lower-rated players can easily shoot a one-off round well outside the "allowable" deviation range. We just include it because they had a super hot round?
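A toy simulation makes that asymmetry concrete: one +3 SD "super hot round" moves a high-variance amateur's average noticeably more than a consistent pro's. The means, SDs, and round counts below are invented for illustration, not PDGA math.

```python
import random

random.seed(7)

def boost_from_hot_round(mean, sd, n_rounds=30, hot_sds=3.0):
    """Change in the simple average after appending one +3 SD round."""
    rounds = [random.gauss(mean, sd) for _ in range(n_rounds)]
    base = sum(rounds) / len(rounds)
    rounds.append(mean + hot_sds * sd)  # the one-off hot round
    return sum(rounds) / len(rounds) - base

print(f"pro, SD 15:     +{boost_from_hot_round(1020, 15):.1f} points")
print(f"amateur, SD 35: +{boost_from_hot_round(880, 35):.1f} points")
```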
 
I think it's interesting (or whatever) that Climo's peak of 1044, first reached in Sep '02, stood until being tied in Aug '10 and finally broken by Feldberg (1046) in Mar '11. So it stood for over 8 years as the top rating. And 5 additional men have topped 1044 since then.

Similarly, Juliana's top rating of 968 stood from May '02 until tied by Val in Oct '11 and broken (969) in Mar '12, so nearly 10 years on top. And 3 additional women have topped 968 since then.
 
Josh Anthon - 1043 currently
Simon Lizotte - 1041 in 2015
Cale Leiviska - 1041 in 2012
Jesper Lundmark - 1041 in 2009
Darrell Nodland - 1040 in 2008
 
Shane Seal - 1040 in 2011
Eagle McMahon - 1041 in 2018
 
At the risk of sounding a bit out of it... does a supposed long-term drift matter? Unless you want to use player ratings to compare players from different eras, which is not at all their purpose and a rather silly exercise regardless of basis, they're good enough (though admittedly no better) for comparing active players at their recent level of play.
 
I'd love to see this chart get updated when Eagle hits 1060 at the end of 2019.
 
Just my two cents, but it seems like the top pro ratings *should* be creeping up as the sport grows larger, just like we're seeing. There's nothing strange about the best player out of a pool of 100,000 being better than the best player out of a pool of 1,000. Whether that's enough to fully account for the increases is another question, but it would at least partially explain the rating creep we've seen.
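That intuition is easy to sanity-check with a simulation: draw N "players" from the same fixed skill distribution and watch the expected maximum climb with N, even though nobody actually got better. The mean and SD here are arbitrary, chosen only for illustration.

```python
import random

random.seed(42)

def expected_top_rating(pool_size, trials=50, mean=950, sd=25):
    """Average, over many trials, of the best rating in a random pool."""
    tops = [max(random.gauss(mean, sd) for _ in range(pool_size))
            for _ in range(trials)]
    return sum(tops) / len(tops)

for n in (1_000, 10_000, 100_000):
    print(f"pool of {n:>7,}: best ≈ {expected_top_rating(n):.0f}")
```

The distribution never changes, yet the best rating in the pool keeps rising as the pool grows, which is exactly the pattern described above.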
 
Matt Orum - 1041 past Nov.
Eagle McMahon - 1048 past Oct.
 
