I'm gonna get myself in trouble, but what the hay.
You can get some pretty significant results with very small sample sizes, if you understand statistics. It just depends on the data and what you're looking at. I know lots of folks hate statistics these days, but that came about because statistics often disagreed with what people wanted to hear. Generally speaking, statistics match reality, and what people want to hear doesn't. That's a bad situation.
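To make that concrete, here's a minimal sketch of a one-sample t-test on just five made-up measurements (the numbers and the null value are purely hypothetical, chosen for illustration). With a large effect relative to the scatter, even n = 5 clears the standard 5% significance bar:

```python
import math

# Toy sample: five measurements, numbers invented for illustration.
sample = [5.1, 5.3, 4.9, 5.2, 5.0]
mu0 = 4.0  # null-hypothesis mean we are testing against

n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator)
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
# One-sample t statistic: effect size divided by the standard error
t = (mean - mu0) / (s / math.sqrt(n))

# Two-sided 5% critical value for df = 4, from a standard t table
t_crit = 2.776

print(f"t = {t:.2f}, critical value = {t_crit}")
print("significant" if abs(t) > t_crit else "not significant")
```

The point isn't this particular sample; it's that significance depends on the effect size relative to the noise, not on the sample size alone. Anyone can run this check themselves with published numbers.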
If Steve did his statistics using standard methods with the correct values, he will get a clear answer as to whether his sample size was large enough and whether his result is meaningful. Beyond the fact that Steve is an actuary (a very specialized type of statistician), statistical results have been checked against real outcomes for a very long time and have proven highly consistent. I suppose you could argue that Steve is... misleading us, but in real science you publish your numbers (I think Steve does this), and anyone with a statistical calculator can check them.
When Steve tells us that something isn't statistically significant, presumably his methodology accounts for sample size, and that tells you how much you can count on the result. If the sample size is too small, the margins of error become untenable, and the honest answer is "you can't tell." Steve would presumably say so.
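Here's a quick sketch of how the margin of error blows up as the sample shrinks. The standard deviation is an assumed stand-in value, and I'm using the rough normal multiplier 1.96 for a 95% interval; with tiny samples the proper t multiplier is even larger, so small-n margins are worse than shown:

```python
import math

# Assumed sample standard deviation, for illustration only.
s = 10.0
# Rough 95% multiplier (normal approximation; optimistic at small n).
z = 1.96

for n in (4, 16, 100, 400):
    # Margin of error shrinks only with the square root of n:
    # quadrupling the sample merely halves the margin.
    margin = z * s / math.sqrt(n)
    print(f"n = {n:>3}: 95% margin of error = +/-{margin:.2f}")
```

With n = 4 the interval is five times wider than with n = 100, which is exactly why a too-small sample leaves you unable to tell anything.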