I have great confidence in these statistics, and in the results, for people who are wise enough to understand them - and take them seriously.
Here's the other thought: statistics are not persuasive.
There are two reasons for this. One, I'll call "reasonable" and the other "unreasonable".
The reasonable reason for being unconvinced by statistics is that statistics are extremely error-prone.
Here are a few illustrations. Remember the exit polls in the 2004 U.S. elections? Professional pollsters "goofed" (or so they say). Remember how confident Romney's pollsters were about a victory in 2012? You know how one year, "scientific studies" show that nutrient X helps prevent disease Y, and a couple of years later, "scientific studies" show that nutrient X increases your chance of getting disease Y? These mistakes happen because bias creeps into statistics in a huge number of ways that are extremely hard to catch, even for professionals.
Another problem is that statistics (technically, "parameters"—the things that you infer from statistics) try to describe very large numbers of things (people using a medical treatment, people earning money, people using plus lenses, etc.) using only a few numbers, like mean and variance. This means that you lose vast amounts of information whenever you use statistics.
For parameters to be meaningful, you have to assume that the population they describe follows a certain "distribution" that is mathematically easy to work with. The most famous distribution is the "normal distribution" or "bell curve". For many populations in nature, the normal distribution is a good-enough approximation, but for many, it's not. In many populations, there are multiple trends, subgroups that behave differently from a main group, factors causing the distribution to be skewed one way or the other, etc. (Examples provided on request.)
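Here is a small, self-contained sketch of the point above (the populations and numbers are invented for illustration, not taken from any real study): a smooth bell-curve population and a two-subgroup population can have nearly identical means and standard deviations, yet in the second one almost nobody is actually near the "average".

```python
import random
import statistics

random.seed(0)

# One unimodal population, and one made of two distinct subgroups,
# constructed so their means and standard deviations come out similar.
unimodal = [random.gauss(100, 25) for _ in range(10_000)]
bimodal = ([random.gauss(75, 5) for _ in range(5_000)] +
           [random.gauss(125, 5) for _ in range(5_000)])

for name, pop in [("unimodal", unimodal), ("bimodal", bimodal)]:
    mean = statistics.mean(pop)
    sd = statistics.stdev(pop)
    # What fraction of the population is within 10 units of the mean?
    near_mean = sum(abs(x - mean) < 10 for x in pop) / len(pop)
    print(f"{name}: mean={mean:.1f}, sd={sd:.1f}, "
          f"within 10 of mean: {near_mean:.0%}")
```

Both populations report a mean near 100 and a standard deviation near 25, but in the bimodal one almost no individual is anywhere near 100. Report only the summary numbers and the two populations look interchangeable; the structure that matters - the two subgroups - has been thrown away.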
Have you ever tried to cook a recipe and had it come out terrible, even though you thought you did everything just right? After some experimentation, maybe you find some tricky little thing that the recipe didn't spell out explicitly or that you misunderstood, which makes all the difference. The same thing happens all the time in engineering and science. Lots of tiny little factors make a big difference. In polling, tiny, apparently insignificant variations in the wording of a question in a survey can swing the results between opposite conclusions (with p < 0.01). Statistics don't normally talk about this kind of stuff, because it's very hard to talk about and research. The complexity of causal factors in any practical matter makes many statistical results, even valid ones, inapplicable in practice.
Another problem is that using statistical methods correctly is just plain tricky, even for people who are good at math.
For example, the statistical fallacy described in the previous message is extremely common. Statisticians loudly complain about that fallacy, but professional scientists make it all the time in journal papers, and the papers pass peer review.
The number of ways that statistical results can go wrong goes far beyond what I've said here. Statisticians have looked into the misuse of statistics and found problems of amazing severity and frequency. This article in The Atlantic provides a nice overview of the work of John Ioannidis, a medical researcher who looked into statistical errors in leading medical journals. "Of the 49 [most highly regarded medical articles published in the last 13 years], 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated."
Sad to say, when someone tells me that they have statistics that "show" some "result", I usually ignore them, and I think I'm right to do so. I view statistics as sometimes an important supplement to other reasoning, but not reasonably persuasive by themselves. When people trumpet statistics as "proving" something without bias and with greater certainty than can be achieved by common-sense reasoning, I hear that as an attempt to bully me into a conclusion that the speaker knows full well is not justified. When people have a reasonable case to make for a conclusion, they just make it; they don't have to resort to that kind of tactic.
For the unreasonable reason why people are unpersuaded by statistics (and by reasonable arguments of all kinds), I refer you to the Asch conformity experiments, which proved statistically that 75% of humanity just follow the rest of their tribe and don't think for themselves. Just kidding, of course, but there is certainly a grain of truth in that. It looks like you're onto this, too:
I know that few people actually LOOK at facts and science.
The real lesson here, for anyone who wants to spread a genuinely good idea, is that persuasion is itself a job that requires thought, dedication, and creativity, no less than coming up with or recognizing the good idea in the first place. The fact that an idea is good does not by itself give it much strength in the marketplace of ideas.
Statistical support might help and might hurt, depending on how you use it.