Weighted Reviews

If I correctly understand your proposal, does that mean someone like myself, with 300 courses under my belt, or my course-bagging buddy with 700+ courses, would have to perform a few hundred "A vs. B" head-to-head comparisons for every single new course we choose to play???

If so, you're sorely mistaken if you think that's gonna happen.
Yeah, no, I don't think that's going to happen. There might be ways around that, though. The rating tool could just pick some number of random courses to A/B. Or it could use your initial answers to home in on the vicinity of the ranking quickly and skip most of the comparisons.
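(Very rough sketch of the "skip most of the comparisons" idea: if a reviewer's already-played courses are kept in ranked order, the tool could binary-search for where the new course slots in, so a 300-course bagger would answer roughly log2(300) ≈ 9 A/B questions instead of 300. Everything below, names and data shapes included, is made up purely for illustration.)

```python
# Hypothetical sketch: slot a new course into an already-ranked list with a
# binary search, so ~log2(N) A/B questions are needed instead of N.
# Nothing here reflects how DGCR actually works; names are placeholders.

def insert_by_comparison(ranked_courses, new_course, prefers_new):
    """ranked_courses is ordered best-to-worst; prefers_new(existing) answers
    the A/B question: does the reviewer like the new course better?"""
    lo, hi = 0, len(ranked_courses)
    while lo < hi:
        mid = (lo + hi) // 2
        if prefers_new(ranked_courses[mid]):
            hi = mid           # new course ranks above the midpoint course
        else:
            lo = mid + 1       # new course ranks below the midpoint course
    ranked_courses.insert(lo, new_course)
    return ranked_courses

# Example with a canned answer just to show the mechanics:
my_ranking = ["Best Course", "Decent Course", "Worst Course"]
insert_by_comparison(my_ranking, "New Course",
                     prefers_new=lambda existing: existing == "Worst Course")
# my_ranking is now ["Best Course", "Decent Course", "New Course", "Worst Course"]
```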

I'll repeat that this is just an idea I thought might be fun to kick around, not something I seriously think might get adopted by this site.


....anytime soon :)

ETA: and certainly not a replacement for written reviews.
 
The way I see it, rating a course already gives some sort of comparison between all courses I've played. I think this holds for any reviewer who has reviewed more than a handful of courses. It would be redundant for me to say "I prefer course A over course B" when I've already rated one 4.0 and one 3.5.

Nevertheless, rating and ranking courses in that way simply doesn't cut it. Courses are so different, especially when comparing across different regions, that I still value the review more highly than the number in every case. When getting to a new region, I've often found a trusted reviewer that I find I can --- pardon the obvious word play -- trust. In Florida, it was reposado. The rating numbers actually meant very little. What mattered was gathering significant information from a trustworthy source who had played all the courses in the region in addition to many more nationwide.
 
How about a separate ranking system based on "which course would you rather play?" It's been a looong time since I took a statistics class, so IDRC what this system is called, but I *think* it's a thing. Every time you go to rate a new course, it would go down the list of courses you've played, and one by one you choose whether you like that one or the new one more. Everyone's comparisons get aggregated, then the # of discs can be assigned from the resulting rankings. The one that ends up at the top of the list is 5 discs; the one at the bottom is zero.

Because reviewers' regions overlap, every course would (hopefully, eventually) be compared to every other course by some degree of separation, and regional flattening would hopefully be compensated for.

Written reviews wouldn't have to be a requirement to contribute to the rankings. That could be a negative, but it might encourage a lot more people (like me) to contribute and reduce outliers caused by low numbers of rankings.

For someone to give a crazy high ranking to their local baskets-in-an-open-field course, they would have to pretty brazenly say they prefer it to some other highly ranked course. IOW, it takes outright dishonesty to throw the rankings, as opposed to innocent enthusiasm for their new local course.

An extremely simplified example of how this works: 3 reviewers, 4 courses, and each reviewer has played a different pair of courses.
- The first reviewer has played courses A and B and says they like A more than B. A is now 5 in the overall ratings and B is 0.
- The second reviewer has played B and C and says they like B more than C. A is now 5, B is 2.5, and C is 0.
- The third reviewer has played C and D and says they like C more than D. A is now 5, B is 3.33, C is 1.67, and D is 0.


You would get weird things at first when there are missing connections, but I would think that would go away pretty quickly as connections are made. One hitch I can see is when a new reviewer reviews a new course. Then you can have a "floater". Say they have only played C and E and they prefer E. All you know is that E is better than C; you don't know where it goes in comparison to A and B until you get some connecting comparisons.
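In case it helps to see the mechanics spelled out, here's a rough sketch of the toy example above, assuming the aggregated comparisons settle into one consistent chain (real, possibly conflicting data would need a proper pairwise model, something like Bradley-Terry). It strings the preferences into a single order and spreads the disc ratings evenly from 5 down to 0, which reproduces the 5 / 3.33 / 1.67 / 0 numbers:

```python
# Toy sketch only: assumes the comparisons form one consistent chain
# (A > B, B > C, C > D). Conflicting real-world comparisons would need a
# proper pairwise-ranking model; this just shows how the numbers spread.

def rank_from_chain(preferences):
    """preferences: list of (better, worse) pairs forming a simple chain."""
    order = [preferences[0][0]]
    for better, worse in preferences:
        if order[-1] == better:
            order.append(worse)
    return order

def spread_ratings(order, top=5.0, bottom=0.0):
    """Evenly space disc ratings from `top` (best) down to `bottom` (worst)."""
    step = (top - bottom) / (len(order) - 1)
    return {course: round(top - i * step, 2) for i, course in enumerate(order)}

comparisons = [("A", "B"), ("B", "C"), ("C", "D")]
print(spread_ratings(rank_from_chain(comparisons)))
# {'A': 5.0, 'B': 3.33, 'C': 1.67, 'D': 0.0}
```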


The bolded suggestion (going down the list of courses you've played, one by one) is basically what I do to determine the numerical score for each course rating. I sort my reviewed courses by rating and scroll down the list, trying to decide which "tier" best fits the newly played course. Tiers generally correspond to the rating options presented on this site, although there are a few tiers that get combined into a single rating (for example, I have two distinct tiers lumped into the 4 disc bucket).

I had to do a fairly major rating cleanup a few years back after playing Maple Hill. With a new idea of what a 5 disc course was, I had a whole new tier at the top and had to remap my existing tiers against the dgcr rating scale. The two courses that I had previously rated as 5s got bumped down to 4.5, the best of the 4.5s held their rating and the lesser 4.5s got downgraded. This continued down the list until my tiers again lined up with the dgcr rating scale. Of course, I couldn't adjust ratings for extinct courses, which leaves a few extinct courses "overrated".

Other than the major cleanup, I've gone back on one other occasion and adjusted ratings on a handful of courses to true up the scales. Outside of those two instances, my tiers have remained static. Using tiers has made it fairly painless to keep a regularly updated ranking of courses. I would never attempt readjustments to my rankings if I had to rank all the courses individually.
 
i do the same thing for courses in the 2.0-3.0 range. outside of that, it's pretty obvious to me where i'm going to rank/rate a course.

and i also have had to go back and readjust the whole scale after i got out into the rest of the country. CO and the Great Lakes changed everything. i expect it to happen again to some degree once i get to check out New England.
 
The way I see it, rating a course already gives some sort of comparison between all courses I've played. I think this holds for any reviewer who has reviewed more than a handful of courses. It would be redundant for me to say "I prefer course A over course B" when I've already rated one 4.0 and one 3.5.

Nevertheless, rating and ranking courses in that way simply doesn't cut it. Courses are so different, especially when comparing across different regions, that I still value the review more highly than the number in every case. When getting to a new region, I've often found a trusted reviewer that I find I can --- pardon the obvious word play -- trust. In Florida, it was reposado. The rating numbers actually meant very little. What mattered was gathering significant information from a trustworthy source who had played all the courses in the region in addition to many more nationwide.

Meanwhile, over in the Movement in the Top 10 thread, people are getting their undies in a bunch over hundredths of a rating point.
 
Meanwhile, over in the Movement in the Top 10 thread, people are getting their undies in a bunch over hundredths of a rating point.
Somewhere, somebody's scheming to write a program that collects the thousandths of a rating point from aalllll the courses on the site and funnels them toward their favorite course.


Basically, DGCR's very own version of Office Space. And before you know it, some mehtastic course out in BFE suddenly cracks onto the Top 10.
 
Somewhere, somebody's scheming to write a program that collects the thousandths of a rating point from aalllll the courses on the site and funnels them toward their favorite course.


Basically, DGCR's very own version of Office Space. And before you know it, some mehtastic course out in BFE suddenly cracks onto the Top 10.

...with a 523 disc rating.
 
Maybe somebody can come up with a formula that results in higher-par courses automatically getting higher rankings, and we can crash the whole thing.
 
Meanwhile, over in the Movement in the Top 10 thread, people are getting their undies in a bunch over hundredths of a rating point.

Discuss what is par for your next post. Your post has been discussed into the ground since essentially day one of the site and nothing has changed. That's all.

Top 10 is one thing, and par is another. I'd say if you want to enjoy the full richness of DGCR, it's worth a foray into the ageless discussion of what to do with found discs. It's even more invigorating in 2019 than in 2010!
#growthesport
 
Maybe somebody can come up with a formula that results in higher-par courses automatically getting higher rankings, and we can crash the whole thing.

There's a newly listed course, Chetola Resort in Blowing Rock. Whoever posted the course accidentally entered the hole lengths into the par columns. Until it was fixed, par was 1,439.
 
There's a newly listed course, Chetola Resort in Blowing Rock. Whoever posted the course accidentally entered the hole lengths into the par columns. Until it was fixed, par was 1,439.

I've seen a lot of formulas for setting par, including distance-based, but....
 
Maybe somebody can come up with a formula that results in higher-par courses automatically getting higher rankings, and we can crash the whole thing.

It is amazingly close to that already. More holes and harder holes seem to be the main drivers of course ratings.
 
That's why I put the "automatically" in there. Say, multiply the consensus opinion rating times the course par, giving the higher-par courses an extra 10 or 15% boost. Then we could argue about the reviewers' opinions and whether par was set "correctly", all at once!
 
The site does that, once a year. A Top-25 for courses, based on reviews in that calendar year.

It's interesting, though you see mostly the same names, shuffled around a bit. You don't see 4.6-rated courses coming out with a 3.3, or vice versa.

I've advocated (dreamed of) a feature where users can parse the reviews with whatever options they want, and create customized Top-10 lists. I gather it's more work than I imagine.

I guess what I am really suggesting is that a "Hot 10" replace the top ten on the home page of the site, to take some emphasis off being in the top 10. It would be nice to have different courses highlighted for people who don't do deep dives into the course search function or the "More Top Courses" page. My other thought is that it might encourage people to continue to review top courses even when they think everything has been said in previous reviews, because older reviews won't help keep top courses on the home page. Just a thought. I have no idea if that would work as intended.
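For what it's worth, a "Hot 10" like that seems straightforward to compute if only reviews from a recent window count toward it. A hedged sketch (the data shape, the one-year window, and the minimum-review cutoff are all assumptions, not anything the site actually does):

```python
# Hedged sketch of a "Hot 10": rank courses only by reviews written inside a
# recent window, so older reviews stop propping a course up on the home page.
# The data shape, window, and cutoffs are assumptions for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

def hot_list(reviews, top_n=10, window_days=365, min_reviews=3):
    """reviews: iterable of (course, rating, date_reviewed) tuples."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = defaultdict(list)
    for course, rating, date_reviewed in reviews:
        if date_reviewed >= cutoff:
            recent[course].append(rating)
    averages = {c: sum(r) / len(r) for c, r in recent.items() if len(r) >= min_reviews}
    return sorted(averages, key=averages.get, reverse=True)[:top_n]
```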
 
That's why I put the "automatically" in there. Say, multiply the consensus opinion rating times the course par, giving the higher-par courses an extra 10 or 15% boost. Then we could argue about the reviewers' opinions and whether par was set "correctly", all at once!

Great, one more reason to set par too high.

How about divide by par?
 
Great, one more reason to set par too high.

How about divide by par?

But that's what makes it a great idea! We double the argument by ranting that a course is Top 10 because the pars are too generous (on top of the hometown reviewers).
 
Nevertheless, rating and ranking courses in that way simply doesn't cut it. Courses are so different, especially when comparing across different regions, that I still value the review more highly than the number in every case. When getting to a new region, I've often found a trusted reviewer that I find I can --- pardon the obvious word play -- trust. In Florida, it was reposado. The rating numbers actually meant very little. What mattered was gathering significant information from a trustworthy source who had played all the courses in the region in addition to many more nationwide.
I do something like that: if I run across a review that speaks to me, I'll read that person's other reviews in the area. But as far as that particular manner of rating not cutting it, couldn't you just as well say that about the current system? Or maybe you are saying that. ...But if you're looking for new courses to play, don't you start out by looking at the higher-rated ones in the area first? If I'm going on a road trip somewhere and hope to hit a course in the area or on the way, I look at my route on the DGCR map and see what's there. If there are a lot, I might filter out all the ones below X.X rating and start looking at what remains.

The way I see it, rating a course already gives some sort of comparison between all courses I've played. I think this holds for any reviewer who has reviewed more than a handful of courses. It would be redundant for me to say "I prefer course A over course B" when I've already rated one 4.0 and one 3.5.
I get what you mean, but with the half-disc units you can only put courses in tiers, so you're going to end up with bunches of courses ranked the same numerically in your individual list. That's just fine on its own, but idk how much it would mathematically gum up the works if you tried to connect everybody's rankings together, which is the idea.

Maybe it still works out OK? If so, it might be possible to do a half-assed version of this system with the data that's already on here. The algorithm could take each reviewer's ratings as tiers of preference and discount the actual number value of the ratings. If someone only has two reviews, one's a 1 and one's a 4.5, it would only count that they prefer/recommend/rate-higher (however you want to think about it) the second one. Reviewers with only one review couldn't be counted.
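Something along these lines, maybe (the dict-of-dicts input is just an assumed shape for illustration; the point is that only strict "this rating is higher than that one" pairs count, ties within a tier say nothing, and single-review users contribute nothing):

```python
# Sketch of mining implied preferences from ratings already on the site.
# Each reviewer's ratings are treated purely as tiers; the input format is
# an assumption, not the site's actual data model.
from itertools import combinations

def implied_preferences(reviews_by_user):
    """reviews_by_user: {reviewer: {course: rating}} -> list of (better, worse)."""
    pairs = []
    for ratings in reviews_by_user.values():
        if len(ratings) < 2:               # single-review users can't be counted
            continue
        for a, b in combinations(ratings, 2):
            if ratings[a] > ratings[b]:
                pairs.append((a, b))
            elif ratings[b] > ratings[a]:
                pairs.append((b, a))
            # equal ratings (same tier) imply no preference, so they're skipped
    return pairs

reviews = {
    "reviewer1": {"Course A": 4.5, "Course B": 1.0},
    "reviewer2": {"Course B": 3.0, "Course C": 3.0},   # same tier: no pair
    "reviewer3": {"Course D": 2.5},                    # one review: skipped
}
print(implied_preferences(reviews))   # [('Course A', 'Course B')]
```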

I'll reiterate what I think the positives of this system might be:

- It might reduce outliers because you can only say a course is better or worse than another course.
- It would spread the ratings out evenly across the full number spectrum (grading on a curve, basically; there'd be the same number of courses rated 0 to 1 as 1 to 2, and 2 to 3, and so on). This may not accurately reflect the quality of the courses (an argument for having both kinds of ratings, maybe?) but it would make the collective preferences easier to distinguish.

I don't know but I suspect these things together would mostly squeeze the hump in the bell curve toward the bottom end and a lot of the baskets-around-a-soccer-field type courses would go down. It may not make a huge difference when considering all courses everywhere at once but when you look at smaller areas it might.

Ok, I think I'm done pitching this for now. Sorry for the interruption. You can go back to discussing par. Me, I'm such a casual (and bad) player I haven't bothered to keep track of my score in 15 years.
 
Ok, I think I'm done pitching this for now. Sorry for the interruption. You can go back to discussing par. Me, I'm such a casual (and bad) player I haven't bothered to keep track of my score in 15 years.
Apparently it's already too late to edit this post and add a smiley face, but in case the tone is lost, this is meant to be read as (attempted) humor, not bitterness. I realized after the fact it might sound like the latter.
 