
Trusted Reviewer - Groupthink

Which TR do you trust the most? (more than one choice allowed)


Total voters: 66
It says that this percentage of reviews is within 1/2 disc of the DGCR average:
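
For reference, a minimal sketch of how a within-half-disc percentage can be computed for one reviewer. The (reviewer rating, DGCR average) pair layout and the example numbers are made up for illustration; this is not how the site stores anything.

Code:
# Percentage of one reviewer's ratings that land within 0.5 of the DGCR
# average. The data layout is hypothetical.

def pct_within_half_disc(reviews):
    """reviews: list of (reviewer_rating, dgcr_average) pairs."""
    if not reviews:
        return 0.0
    hits = sum(1 for mine, avg in reviews if abs(mine - avg) <= 0.5)
    return 100.0 * hits / len(reviews)

# Made-up example: two of the three ratings are within half a disc.
print(round(pct_within_half_disc([(4.0, 3.5), (3.0, 4.0), (2.5, 2.5)]), 1))  # 66.7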



My conclusions are that either:
1) TRs do not stray far from the published average,
2) Non-TRs take their cues on rating from TRs (*unlikely), or
3) TRs' ratings are no more meaningful than the DGCR average.

*If most courses had lots of TR reviews, this could be a valid conclusion. But I would guess TRs are in the solid minority of reviewers for most courses (especially early on, while a course's average is being built).

I still maintain that the numbers would look very different if you looked at the rest of the data. I would have guessed before you started this project that you'd see the least variation at the top of the scale between TRs and the average.
 
I still maintain that the numbers would look very different if you looked at the rest of the data. I would have guessed before you started this project that you'd see the least variation at the top of the scale between TRs and the average.

Why would they look different in the 2.5-3.5 range?

I'm curious because, IMO, I have been to very few 3.5+ courses that I regret spending precious time playing, while I have been almost completely unimpressed by every course I have played that is rated <3.0. In other words, the general average works great for me, so how do the efforts/effects of TRs in that range make a difference?

Why would they match on the high end more?
 
I am not sure what this says about me

As Mashnut, BrotherD, and others pointed out very early on, these numbers do not pinpoint a cause; they just are what they are.

But it would be very hard for anyone to argue against the premise that you have a system you follow and stay true to, whether or not the general consensus agrees with you.

That makes me think of one last thing I'll do before I hit the hay: look at :thmbup: & :thmbdown: counts and see if there is any correlation between them and how closely ratings agree with the general public.
 
Why would they look different in the 2.5-3.5 range?

I'm curious because, IMO, I have been to very few 3.5+ courses that I regret spending precious time playing, while I have been almost completely unimpressed by every course I have played that is rated <3.0. In other words, the general average works great for me, so how do the efforts/effects of TRs in that range make a difference?

Why would they match on the high end more?

I would agree with the first part of your statement: there are few 3.5+ rated courses that aren't at least good courses, so the TRs tend to be pretty close to the average. Below that, though, a lot of courses that should be down in the 0-2 range get pushed up into the middle, and you lose the distinction between merely average and actually bad courses. I haven't spent the time gathering the data, so I have no idea whether TRs contribute to that compression and inflation, but I think it would be interesting to look at.
 
This is all opinion and in hindsight I have changed my opinions on many courses after being exposed to more.
There are many things that contribute to an enjoyable round and many things that hinder a good time.
I tend to not really trust anyone to be honest. I've been let down and surprised too many times in the past.
I think generally the site gets it right but there are courses and reviewers who are overrated and underrated.
I'd consider myself trustworthy for my region.
I have found that there are different types of courses and different reviewers.
In the end you try to make sense of it all.
I don't think TRs are any more trusted than the avg reviewer; it's more that we've put in more work.
Some of us have a lot of avg work that has mounted up to medal status.
 
Last data dump of the day. Is there correlation between thumbs and agreeability?

Code:
[B]Top 20 TR    Up    Down  Down/Up  w/in 0.5 discs[/B]
AdamH        974     30     3%       95%
JR Stengel  1475     57     4%       95%
mashnut     4174    286     7%       94%
tallpaul    1062     65     6%       92%
srm_520     1314     41     3%       88%
GoodDrive   1430     77     5%       88%
harr0140    2435    130     5%       87%
Denny Rit.   646    149    23%       84%
bjreagh     1230    143    12%       83%
#19325      1079     55     5%       83%
Jukeshoe    1081     64     6%       83%
swatso      1463    145    10%       82%
The Valkry  2536    186     7%       81%
DSCJNKY     1058     69     7%       81%
Innovadude   949    340    36%       79%
ERicJ       1459     64     4%       79%
optidiscic  1337     92     7%       77%
sillybizz   1092     96     9%       75%
prerube      996    193    19%       75%
mndiscg      398     46    12%       75%
Donovan     1684     66     4%       50%
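
If anyone wants to put an actual number on that, here is a quick sketch: the Pearson correlation between the Down/Up column and the w/in 0.5 discs column, with the values typed in from the table above. A result near zero would mean basically no linear relationship.

Code:
# Pearson correlation between thumbs-down ratio (%) and agreement with the
# DGCR average (% of ratings within 0.5 discs); values transcribed from the
# table above.
from math import sqrt

down_ratio  = [3, 4, 7, 6, 3, 5, 5, 23, 12, 5, 6, 10, 7, 7, 36, 4, 7, 9, 19, 12, 4]
within_half = [95, 95, 94, 92, 88, 88, 87, 84, 83, 83, 83, 82, 81, 81, 79, 79, 77, 75, 75, 75, 50]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(down_ratio, within_half), 2))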
 
I haven't spent the time gathering the data, so I have no idea whether TRs contribute to that compression and inflation, but I think it would be interesting to look at.

I found it interesting that, according to this data, it appears TRs add to ratings inflation on the courses rated 4.0+.
 
I don't think TRs are any more trusted than the avg reviewer; it's more that we've put in more work.

To me, it is becoming more and more clear that TR status is most useful for filtering out untrusted reviewers when you want to weed through the reviews on courses that have tons of them.

Unfortunately, a lot of good reviews get filtered out when that button is pushed.

Maybe timg should come up with a UTR tag (it would not need to be visible) that, upon a button click, would filter out everyone with a high :thmbdown: count, few reviews written, and little experience. Mine would get filtered out.....oh well.
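
Something along these lines is all that filter would have to do. The field names and thresholds below are invented for illustration, not an actual DGCR feature.

Code:
# Hypothetical "UTR filter": hide reviewers with a high thumbs-down ratio,
# few reviews written, or little experience. Fields and thresholds are made up.
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    thumbs_up: int
    thumbs_down: int
    reviews_written: int
    years_playing: float

def passes_utr_filter(r, max_down_ratio=0.15, min_reviews=10, min_years=2.0):
    down_ratio = r.thumbs_down / max(r.thumbs_up + r.thumbs_down, 1)
    return (down_ratio <= max_down_ratio
            and r.reviews_written >= min_reviews
            and r.years_playing >= min_years)

reviewers = [Reviewer("veteran", 500, 20, 120, 8.0),
             Reviewer("newbie", 5, 4, 3, 0.5)]
print([r.name for r in reviewers if passes_utr_filter(r)])  # ['veteran']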
 
Last data dump of the day. Is there correlation between thumbs and agreeability?

You know, I don't find this as meaningful. The thumbs down don't go away if you re-write a review, so some could have written some cruddy reviews early on and then gone back and decided to invest more into each review. The first several reviews I ever did on this site were awful, and the majority of my overall downs came during that time. Then one winter I was bored and went back and re-wrote them, and added several more reviews that were much, much better. The new ones hardly got any downs.

Like I said I don't trust TRs over the norm.

The main difference between TR and avg....quality of the writing. Most TRs are very descriptive and cover the amenities side (this has become more important to me over the years).
 
I would like to see three stats added: average rating given, standard deviation of ratings given, and average length of written review. This would be more informative and trust-inspiring than stats that show most of the 20 top reviewers consistently overrate 4-star courses. I was tempted to check the one with the strongest tendency to underrate, but I didn't.
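
Those three stats would be easy to compute from each reviewer's list of reviews. A sketch, assuming a made-up (rating, review text) layout:

Code:
# Average rating given, standard deviation of ratings given, and average
# written-review length (in words) for one reviewer.
from statistics import mean, pstdev

def reviewer_stats(reviews):
    """reviews: list of (rating, review_text) pairs for one reviewer."""
    ratings = [rating for rating, _ in reviews]
    word_counts = [len(text.split()) for _, text in reviews]
    return {"avg_rating": round(mean(ratings), 2),
            "rating_stdev": round(pstdev(ratings), 2),
            "avg_review_words": round(mean(word_counts), 1)}

sample = [(4.0, "Great layout with good signage and elevation."),
          (2.5, "Short, open, and poorly marked."),
          (3.5, "Fun woods course with a few filler holes.")]
print(reviewer_stats(sample))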
 
I picked R just because of column 7, most under the average. I like reviews to be critical. That's why Denny Ritner is one of my favorite reviewers.

I picked R too.

I'd rather get honest, objective reviews and ratings, not ones that are always aligned with the masses. Something about the 'road less traveled' appeals to me.
 
I think the question here is: does agreeability = trustworthiness? That's why we have the helpful voting system I suppose. I'd rather read a review that has something new to say than one that just affirms what everyone else has said. That's why I'll often scroll to the very first reviews of a course.
 
It just occurred to me that the "Over Rate" stat is incomplete/inaccurate/meaningless. What I did was look at the courses each top 20 TR rated 4.0 or higher. So if they rated any course below 4.0 that had a DGCR average of >4.0, that course was not included.

So maybe they have slammed a ton of 4.0+ rated courses....but that does not show up here. I doubt that is the case, based on another analysis I am working on right now and will post. But take that number with a big ole grain of salt.

I did a spot check. I checked the 5 reviewers who had rated a large number of courses in IL & WI. I made sure all 4.0+ courses in each state were included in my search. Then I grabbed all of their courses rated all the way down to 3.0.

I added any course that I had missed before (courses with DGCR ratings of 4.0+ that they rated <4.0). This is what I came up with:

Code:
[B]Reviewer   Courses   TR    DGCR   Diff   Course[/B]
#19325        24      3.5    4.00  -0.50   Fox River Park - Grey Fox
TallPaul      12      3.5    4.27  -0.77   Standing Rocks
Harr0140      22      3.5    4.16  -0.66   Sinnissippi Park
Mashnut       27      3.5    4.00  -0.50   Fox River Park - Grey Fox
Mashnut               3.5    4.21  -0.71   Tower Ridge Park - I
Mashnut               3.5    4.18  -0.68   Camden Park - II
Mashnut               3.0    4.00  -1.00   Woodland Park
Mashnut               3.0    4.21  -1.21   Justin Trails Classic DGC
JukeShoe      10      (has none)

So, out of a total of 95 courses, 8 have now been added that were not included before.

Without these courses, these TRs add an average of 0.07 to ratings inflation (enough to round up to the next half disc about 15% of the time).

With these courses, these TRs add an average of 0.003 to ratings inflation (basically none....but they do not bring down ratings inflation).

Conclusion: TRs do not add to ratings inflation of the top courses that people care most about....and they do nothing to bring it down either.
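
For anyone wanting to reproduce the 0.07 vs. 0.003 numbers, the calculation is just the mean of the per-course (TR rating - DGCR average) differences before and after adding the missed courses. A sketch: the eight added diffs come from the table above, and the 87 originally counted diffs aren't reproduced here, so only the shape of the calculation is shown.

Code:
# Ratings-inflation sketch: mean of (TR rating - DGCR average) per course.
# The eight diffs below are the missed courses from the table above; the
# 87 originally counted diffs (averaging about +0.07) are not reproduced.

def avg_inflation(diffs):
    return sum(diffs) / len(diffs) if diffs else 0.0

missed_course_diffs = [-0.50, -0.77, -0.66, -0.50, -0.71, -0.68, -1.00, -1.21]

# How far these eight pull things down on their own:
print(round(avg_inflation(missed_course_diffs), 2))  # -0.75

# With the original 87 diffs included, avg_inflation(original + missed)
# comes out near zero (the ~0.003 figure above).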
 
I don't really care so much what rating is given to a course as the reviewer's take on the pros and cons. I personally have only played a handful of courses I would rate as 5, but it is obviously subjective. I know what kind of amenities and course I like and want to try. The pros and cons with the course description provide me with the most complete picture.
 
Last data dump of the day. Is there correlation between thumbs and agreeability?


At first glance, it looks like almost no correlation there. My gut feeling (again without actually going and getting the data) is that you'd find much more correlation if you compared helpful percentages to average over- or under-rating, or even just the average rating given out by those reviewers.
 