
Weighted Reviews

I was totally surprised by Black Falls. The one renegade bad review was offset by a bunch of drive-by 5s. I really want to play that course; I've wishlisted it, and it looks frig'n awesome.

I should be able to compose an Excel graphic showing a comparison between the current ratings and the adjusted formula method; it's just a matter of taking a screenshot of my Excel document. There were a few major changes upward. Maple Hill, the Highbridge courses, and the IDGC were all heavily boosted by the weighted formula method.

I wasn't so much concerned with where they fall on the list as with what it does to the rating. I think the assumption was that minimizing that one weird review of Black Falls would bump up the rating, but in this case, where you minimize both the weird review and the drive-by reviews, it actually drops the rating from 4.53 to 4.43. The Woodshed has a nominal bump from 4.28 to 4.30. I don't think that is the kind of change a lot of people were expecting.
 
While I see the merit in some sort of way to weight reviews to lessen the impact of the odd "agenda-driven rating," I have to ask Tim:

How feasible is it to even have this discussion?

If I understand correctly, to implement such a system, course ratings would necessarily be changing not only based on reviews for that particular course, but also based on the continuously evolving status of every person who ever reviewed that course, determined by how DGCR members are voting on the collective body of reviews as a whole.
 
The only part of this idea I'd consider is weighting reviews lower where the reviewer has < x reviews, primarily to lessen the impact of drive-by 5's and revenge reviews. I've wanted to do something about that for a long time, and it would be interesting to see how it pans out with this test data set and whether the rating number ends up more in line with expectations.
 
As a disc golf course owner I fully understand the consequences of putting my course on this most excellent site. Big thanks to Tim and the moderators for allowing us to respond to reviews.

I've had people tell me directly to my face that they did not enjoy the course or thought it was quirky. That is fine and is expected from time to time. What I have a hard time dealing with is theft, vandalism, and personal vendettas.
 
One thing I have always liked about this site is that every level of disc golfer has a voice (from noob all the way to the pros and leaders in the industry).

I think DGCR and the rating/review system do a great job of accomplishing their primary goal. That being said, a lot of the ratings issues seem to revolve around the prestige and free advertising/publicity that come with getting your course into the top 10. Are private course owners complaining that their course is not showing up on the homepage really a good reason to change things? I appreciate their work and passion and have played many private courses, but there are other ways to generate interest in your course, and DGCR does a pretty good job of it already. Besides, when you click on "more top rated" courses (which is my main point below), there are already special filtered lists for "private" and "pay-to-play". You know what is not there? PUBLIC.

So I like the OP's idea of having a few more top-rated options. Why not add 1) weighted average (I do like that concept), 2) TRs only, 3) non-TRs only, 4) non-private, 5) free to play, etc.? And maybe make the link under the main top 10 a little more apparent. I know a lot of this can be done simply by sorting the course database, but a neat little button click is much less work!
 
Below is an Excel chart showing the data set refined for the 44 courses I evaluated. There would be a lot of changes compared to the top 25 that uses equally weighted numbers.

The formula for this graphic is as follows: 1 to 4 reviews, weight 1; 5 or more reviews (non-TR), weight 5; TR, weight 10.
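For anyone curious how that math plays out, here is a minimal sketch of the weighting described above, assuming each review carries the reviewer's total review count and TR status (the field layout and the example numbers are hypothetical, not DGCR's actual data model):

```python
# Weight 1 for reviewers with 1-4 reviews, 5 for non-TR reviewers with 5+, 10 for TRs.
def review_weight(review_count: int, is_tr: bool) -> int:
    """Weight for a single review under the scheme described above."""
    if is_tr:
        return 10
    return 5 if review_count >= 5 else 1

def weighted_course_rating(reviews) -> float:
    """reviews: list of (rating, reviewer_review_count, is_tr) tuples."""
    total = sum(r * review_weight(c, tr) for r, c, tr in reviews)
    weight = sum(review_weight(c, tr) for _, c, tr in reviews)
    return round(total / weight, 2)

# Hypothetical course: two drive-by 5s, one vendetta 0.5, and two TR 4.5s.
print(weighted_course_rating([(5.0, 1, False), (5.0, 2, False),
                              (0.5, 3, False), (4.5, 120, True), (4.5, 80, True)]))
```

In this made-up example the two TR 4.5s carry most of the weight, so the drive-by 5s and the vendetta 0.5 barely move the result.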

The chart has similar results to my first graphic. Maple Hill, the Highbridge courses, Spearfish, etc. are more heavily favored by those who have reviewed a lot of courses, whereas courses like Smugglers, Phantom, and Sabattus have a higher-than-normal concentration of drive-by 5s without a vendetta review yet. The exception is Phantom, which has one vendetta review, but it's offset by 63 drive-by 5s!

Data key:
w ave = weighted average
w rank = weighted rank
c ave = current average (unweighted)
c rank = current rank (unweighted)
change = change in rank between data sets
For each reviewer category there are three columns: weight value, # of reviews, and average score for that category.

This is just data; I'm not advocating for or against any changes. However, if it is at all possible, I would like to see a critics' top 25 list. No other site does anything remotely like that. I have found in general that those who take reviewing seriously have more of a knack for suggesting courses I enjoy. Perhaps that's just me. There seems to be a lot of gang homerism in the course rankings on other competing sites, and it's completely unreliable in helping me target top courses. I recently played a course rated 4.5 out of 5 on another site that was nine holes, no tees, and open.

[Attachment: weighted excel.jpg]
What do you mean by a critic's top 25? Like a TR only list?


Also, what sort of changes do you see if you only drop weight for people that have less than say... 5 reviews?
 
I've never seen that graphic before... I usually look at the threads in the suggestions area so if it was in this forum the first time around, it's probably why I missed it.


I think it's interesting although timing out reviews wouldn't really work unless I had a mechanism that said "x course was improved on this date". Some courses just stay the same so those old reviews are still pretty valid.


The idea of reducing weight for people who have fewer than 5 reviews seems like the best balance for avoiding homers / vendetta reviews. Increasing the weights of TR reviews, I feel, could be gamed to some extent. And someone who's bronze maybe just lives in an area where there aren't many courses, so they don't have an opportunity to write more reviews. I don't think they should be penalized for where they live or their lack of ability/desire to travel.

I mostly agree with this, except that you're somehow penalizing those reviewers that don't have access to many courses, or aren't willing to travel. Sometimes situation dictates what is possible.

You could get super complicated with it and start introducing multipliers based on how many states (or countries) a person has played.

To me, the most interesting part of it is how ratings would change as reviewer status changed.
 
Yes, a critics' top 25 could be some sort of TR list, or a hybrid of that. I think a minimum of 5 TR reviews would be enough to catch most top courses. Of the 44 I've selected to run data on, only 3 courses have fewer than 5 TR reviews: Spearfish (1), Muddy Run (4), and Base Camp (4).

On a re-run of the numbers, only devaluing the "less than 5 reviews" group, I get similar numbers with some exceptions. Sabattus isn't negatively impacted as much, Riverbend now moves way up, and Claiborne now moves much lower.



[Attachment: weighted3.png]
Interesting how very slightly the actual ratings change (for the most part), yet, because the rankings are sorted to the hundredth of a point, they get shuffled.

So it's a huge deal when a course increases from 4.61 to its rightful 4.63.
 
From running all these data sets, I've realized that 6 of the current top 7 are pretty much in the top 10 no matter how you slice it (the exception is Harmon Hills). However, after that there are some major changes. Mostly, like you stated, they just get shuffled around a little bit, but there are a handful of courses that have been greatly impacted by drive-by 5s and vendetta 0.5s in this data set, and I'm sure there are hundreds more when factoring in all 6,140 listed courses.

 
I'm not knocking it---it's more an observation on how tightly the Top 25 is packed, and how little difference there is between courses ranked 10 spots apart.

So any way you filter it, some course will jump into the top 10 and some other course will fall out, even though their ratings are virtually identical. (This is probably true of many things for which there are thousands of entrants and you try to rank the top 10: not a big gap between #9 and #29.)
 

Once you get into the top 10 (or perhaps even the top 25), while it's easy to do mathematically, you're essentially splitting hairs trying to rank them. I'm not saying they're all the same - they're not. Many of them are quite different, and I say:
vive la difference!
 
If we can learn anything from this thread, it's that the existing system, whatever its faults, works as well as circumstances will allow, and whatever we may propose to fix it does little to change the status quo.

This is why I don't gospelize these numbers that much. They are after all a quantified result derived from a salad of glorified opinions. In an alternate universe, we might see somewhat the same batch of top 100 courses in a somewhat different order.
 
Once you get into the top 10 (or perhaps even the top 25), while it's easy to do mathematically, you're essentially splitting hairs trying to rank them. I'm not saying they're all the same - they're not. Many of them are quite different, and I say:
vive la difference!

Yes... I should have said little difference, rating-wise.
 
For your second data set, what weight did you give the < 5 crowd? .5?


A critics list would be challenging just because the way I display TRs on the site is based on a formula rather than a flag. I guess once they hit bronze I could just add a TR flag to their account which would make life easier as I'd no longer be trying to hit a moving target. It's possible but very, very rare that people actually lose TR status.
 
Here are the weighting factors for all three graphics (Photoshop image, Excel graphic, Excel graphic 2); they're also restated as a lookup table in the sketch after the list.

Data set 1: .1 weighting for those with fewer than 5 reviews, .2 for those with 5 or more reviews who are not a TR, .4 for bronze, .6 for silver, .8 for gold, and 1 for diamond.

Data set 2: .1 weighting for those with fewer than 5 reviews, .5 for those with 5 or more reviews who are not a TR, and 1 for TRs.

Data set 3: .1 weighting for those with fewer than 5 reviews and 1 for everyone else.
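To make the three schemes easier to compare side by side, here they are as a simple lookup table (a sketch only; the tier labels follow the list above, and the function name is hypothetical):

```python
# Reviewer-tier weights for each of the three data sets described above.
WEIGHT_SCHEMES = {
    1: {"<5 reviews": 0.1, "5+ non-TR": 0.2, "bronze": 0.4,
        "silver": 0.6, "gold": 0.8, "diamond": 1.0},
    2: {"<5 reviews": 0.1, "5+ non-TR": 0.5, "TR": 1.0},
    3: {"<5 reviews": 0.1, "everyone else": 1.0},
}

def weight_for(data_set: int, tier: str) -> float:
    """Look up the weight a reviewer tier gets under the chosen data set."""
    return WEIGHT_SCHEMES[data_set][tier]

print(weight_for(2, "TR"))  # 1.0
```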

 
How about a separate ranking system based on "which course would you rather play?" It's been a looong time since I took a statistics class, so IDRC what this system is called, but I *think* it's a thing. Every time you go to rate a new course, it would go down the list of courses you've played and, one by one, you choose whether you like that one or the new one more. Everyone's comparisons get aggregated, then the # of discs can be assigned from the resulting rankings. The course that ends up at the top of the list is 5 discs; the one at the bottom is zero.

Because reviewers' regions overlap, every course would (hopefully, eventually) be compared to every other course by some degrees of separation, and regional flattening would hopefully be compensated for.

You could make written reviews not be a requirement for contributing to the rankings, which could be a negative but might encourage a lot more people (like me) to contribute to them and reduce outliers caused by low numbers of rankings.

For someone to give a crazy-high ranking to their local baskets-in-an-open-field course, they would have to pretty brazenly say they prefer it to some other highly ranked course. IOW, it takes outright dishonesty to throw the rankings, as opposed to innocent enthusiasm for their new local course.

An extremely simplified example of how this works: 3 reviewers, 4 courses, and each reviewer has played a different pair of courses (sketched in code below).
- The first reviewer has played courses A and B and says they like A more than B; A is now 5 in the overall ratings and B is 0.
- The second reviewer has played B and C and says they like B more than C; A is now 5, B is 2.5, and C is 0.
- The third reviewer has played C and D and says they like C more than D; A is now 5, B is 3.33, C is 1.66, and D is 0.


You would get weird things at first when there are missing connections, but I would think that would go away pretty quickly as connections are made. One hitch I can see is when a new reviewer reviews a new course; then you can have a "floater." Say they have only played C and E and they prefer E. All you know is that E is better than C, but you don't know where it goes in comparison to A and B until you get some connecting comparisons.
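For what it's worth, here is a minimal sketch of that chain example, assuming the aggregated comparisons form a consistent order with no cycles (real data would need something more robust, e.g. a Bradley-Terry or Elo-style model, and the "floater" case isn't handled):

```python
from collections import defaultdict, deque

def rank_courses(comparisons):
    """comparisons: list of (preferred_course, other_course) pairs."""
    courses = set()
    beats = defaultdict(set)     # course -> set of courses it was preferred over
    indegree = defaultdict(int)  # number of distinct courses preferred over this one
    for winner, loser in comparisons:
        courses.update((winner, loser))
        if loser not in beats[winner]:
            beats[winner].add(loser)
            indegree[loser] += 1

    # Kahn's topological sort: the most-preferred courses surface first.
    queue = deque(c for c in courses if indegree[c] == 0)
    order = []
    while queue:
        course = queue.popleft()
        order.append(course)
        for nxt in beats[course]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    # Spread disc ratings linearly from 5 (top) to 0 (bottom), as in the example.
    n = len(order)
    if n < 2:
        return {c: 5.0 for c in order}
    return {c: round(5 * (n - 1 - i) / (n - 1), 2) for i, c in enumerate(order)}

# The 3-reviewer, 4-course chain: A > B, B > C, C > D.
print(rank_courses([("A", "B"), ("B", "C"), ("C", "D")]))
```

On that chain this gives A = 5.0, B = 3.33, C = 1.67, and D = 0.0, matching the spacing in the example above (give or take rounding).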
 

First post and it's beating a dead horse. Wish someone had told you to save your energy before writing all this.
 
