Author Topic: Statistics, Science and Facts.  (Read 4071 times)

Offline OtisBrown

  • Hero Member
  • *****
  • Posts: 1734
Statistics, Science and Facts.
« on: October 11, 2013, 03:55:57 AM »
Subject:  I know that few people actually LOOK at facts and science.

I appreciate how easy it is to "fix" a person with an over-prescribed minus lens.  There is no doubt that few people wish for prevention, or will wear a plus (full time), when their Snellen is 20/40 to 20/60 (and a prescription of -1.5 to -4 diopters).

Tragically, the OD in his office judges you exactly that way.  Once he makes your vision very sharp with a strong minus - he feels he has done a "perfect job".

The problem?  Scientific tests show that a minus "induces myopia", and cures nothing at all.  While I "argue" for prevention, I agree that prevention must start before you start wearing a minus lens.

I will present the facts that show that a plus can be effective - in my next post.

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #1 on: October 11, 2013, 04:29:18 AM »
Subject: Raw Statistics of Prevention.

Up to the present time, it has been impossible to conduct a "pure plus" study, for the exact reason that few people will intentionally wear a plus (at 20/40, and -1.0 diopters), in a scientific experiment.

The ONLY source of data, regarding the effect of a plus, must come from "bi-focal" studies.  This is because you can force the child to wear a strong plus, by prescribing a minus (that fixes distance), with a plus "on the bottom".  In this study there was NO attempt to get the child to "sit up" and wear the plus at the correct distance.  This is tragic, because you do not get the full effect of the plus - unless you do it that way.  With NO instruction, the child leans forward, which effectively cancels out most of the desired effect of the plus.  But even so, the statistics show the "plus" was highly effective.

In statistics, a result of 0.05 is called "significant".  A result of 0.01 is called "highly significant".  I compiled the results for each year of this "plus study".  They show results of 0.00001 to 0.00000001 for the effect of a plus on the natural eye.  For the "technical mind" of an engineer, or researcher, I would recommend down-loading "Vis_Young9" and reviewing the data.  Here is the blog:

http://myopiafree.wordpress.com/study/
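For readers who want to check a significance figure like the ones above: a two-sample t-test can be run directly from published summary statistics (per-group mean, standard deviation, and count), without the raw data. This is only a sketch with invented numbers - none of the values below come from the Young study:

```python
# Hypothetical per-year summary statistics (invented for illustration,
# NOT taken from the Young bifocal study): yearly change in refraction,
# in diopters, for a control group and a plus/bifocal group.
from scipy.stats import ttest_ind_from_stats

stat, p = ttest_ind_from_stats(
    mean1=-0.50, std1=0.30, nobs1=60,   # control group: worsening
    mean2=-0.10, std2=0.30, nobs2=60,   # bifocal group: nearly stable
    equal_var=True,
)
print(f"t = {stat:.2f}, p = {p:.2e}")   # a difference this clean gives a tiny p
```

A group difference of 0.4 diopters against a 0.3-diopter spread, with 60 per group, is enough to drive the p-value far below 0.001 - which is why quoted p-values alone say little about the size of the effect.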

This is what I propose.  It explains the reason why Todd and others were successful.  It is the basic science of prevention for people who have the motivation for it.

While I have mentioned "down rates" of -0.66 to -0.5 diopters per year, I don't think anyone believes me in that statement.  I don't know why that is so.  But that is the true issue for a person to face - who wishes to get out of -1.0 to -1.5 diopters, when entering a four year college.

Any study that does not "inform" a person of that proven effect of "long-term near" - will certainly fail.  A person must understand this issue to be fully intellectually challenged, and in "control" of his own study.  If you look at the first, "Frequency Table", you will see the control-group going down by -0.33 diopters, and the test group going up by +0.75 diopters, or most reaching 20/20 (refractive status = 0.0 diopters) in about nine months.

I have great confidence in these statistics, and in the results for people who are wise enough to understand them - and take them seriously.


Otis
« Last Edit: October 16, 2013, 06:59:48 AM by OtisBrown »

Offline Torvald

  • Newbie
  • *
  • Posts: 29
Re: Statistics, Science and Facts.
« Reply #2 on: November 06, 2013, 07:14:16 PM »
In statistics, a result of 0.05 is called "significant".  A result of 0.01 is called "highly significant".  I compiled the results for each year of this "plus study".  They show results of 0.00001 to 0.00000001 for the effect of a plus on the natural eye.  For the "technical mind" of an engineer, or researcher, I would recommend down-loading "Vis_Young9" and reviewing the data.  Here is the blog:

http://myopiafree.wordpress.com/study/

Otis, I've got two thoughts to offer about this. I'll split them into separate messages.

First, you asked in another thread for comments on the statistics (I can't find the message right now, though), so here goes. Let me preface this by saying that I have never used statistics for anything real: my entire exposure is limited to homework problems, so I am not an expert.

When people say that a result with a p-value of less than 0.05 is "significant", I think they are usually misusing statistics. Neither your web page cited above nor http://www.myopia.org/bifocaltable4.htm explains how the "level of confidence" was calculated, but the all-too-common procedure is to run "Student's t-test", where the null hypothesis is that the two samples were generated by random variables with exactly the same mean, assuming that both variables are normally distributed and have equal variances. With this test, the p-value is the probability of seeing a difference in sample means at least as large as the one observed, if both samples really were generated by normally distributed variables with exactly the same mean and variance. The usual procedure is to say, when p < 0.05, that your data rejects the null hypothesis.
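The procedure just described can be sketched in a few lines of Python (the data is invented, and `scipy` is assumed to be available):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Two samples whose underlying population means really do differ.
control = rng.normal(loc=0.0, scale=1.0, size=50)
treated = rng.normal(loc=1.5, scale=1.0, size=50)

# Student's t-test: null hypothesis "same mean", assuming normality
# and equal variances in both groups.
stat, p = ttest_ind(control, treated, equal_var=True)
print(f"t = {stat:.2f}, p = {p:.2g}")
# The conventional (and criticized) step: declare "significant" when p < 0.05.
```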

There are two problems with that:

1. That null hypothesis is very unlikely in almost any real-world situation where you are going to the trouble of collecting data. In the case of the bi-focal lenses, the null hypothesis is that the lenses have exactly zero effect. Without even gathering data, such a hypothesis already seems astoundingly unlikely. It's so unlikely, there's little point in refuting it.

2. The p-value reflects your sample size as much as it reflects the real phenomenon being studied. Consequently, when people call a result "significant" because they got p < 0.05, they are conflating the statistical power of their experiment with the domain being studied. The following might make this problem clearer: if you don't want to reject the null hypothesis, then you just use a small sample; if you want to reject the null hypothesis, you just use a large-enough sample to get the p-value below 0.05. Since the null hypothesis is pretty much assured to be false, there is some sample size above which you have a good chance of getting p < 0.05. If you don't get p < 0.05, well, run your experiment again until you do, and presto, you have a "significant" result.
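Point 2 is easy to demonstrate numerically: hold one tiny, fixed real effect constant and vary only the sample size (invented data; `scipy` assumed available):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
true_shift = 0.1   # the real effect: tiny, and identical in both runs

p_values = {}
for n in (20, 20000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(true_shift, 1.0, size=n)
    p_values[n] = ttest_ind(a, b).pvalue
    print(f"n = {n:6d}  p = {p_values[n]:.3g}")
# Nothing about the effect changed between runs; only the sample size did.
```

With twenty subjects per group the tiny effect is invisible; with twenty thousand it is "highly significant" - yet it is the same effect.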

A third problem, which is less severe, is that Student's t-test depends on both random variables being normally distributed and having equal variance. I understand that in practice, this doesn't skew most results too badly, but good practice dictates at least having a quick look at the data to see whether these assumptions hold. If they're wildly off, there are other statistical tests, like Welch's t-test, which does not assume equal variance (and typically assigns somewhat higher p-values to the same data). A good journal paper should mention what you did to verify that the assumptions of your statistical test are true. (I've almost never seen this in journal papers, though.)
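When the equal-variance assumption is clearly violated, the two tests can be compared side by side; in `scipy`, `equal_var=False` selects Welch's test (invented data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Unequal variances AND unequal group sizes: the awkward case for Student's test.
a = rng.normal(0.0, 1.0, size=100)   # large, low-variance group
b = rng.normal(0.3, 3.0, size=30)    # small, high-variance group

p_student = ttest_ind(a, b, equal_var=True).pvalue   # Student's t-test
p_welch   = ttest_ind(a, b, equal_var=False).pvalue  # Welch's t-test
print(f"Student p = {p_student:.3f}   Welch p = {p_welch:.3f}")
# With the high-variance group being the small one, Welch gives the larger,
# more conservative p-value here.
```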

I hope the above has caused you to doubt the reasonableness of calling the results of Student's t-test "significant" if they reject the null hypothesis with p < 0.05. Calling results "significant" because p < 0.05 in order to prove that they are "significant" in the ordinary meaning of the word is the fallacy of equivocation—passing off one thing as another simply because the same word is used for both.

Fortunately, there is a way to demonstrate real significance—significance that has to do only with eyesight and nothing to do with sample size. That is to use 95% confidence intervals: calculate the range of differences between the two population means that the data cannot reject at the 0.05 level. Then your conclusion is something like "mean eyesight improvement is in the range 0.007 to 0.140 diopters, with 95% confidence". Computed this way, your sample size affects only how much you can narrow down the likely range of mean improvement. The amount of improvement is the matter of interest. That's what a person weighing the benefits against the bother of wearing plus lenses would like to know.
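A minimal sketch of that confidence-interval calculation, using invented refraction-change data (not the study's numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented yearly changes in refraction (diopters) for two groups of 40.
control = rng.normal(-0.40, 0.30, size=40)
plus    = rng.normal(-0.05, 0.30, size=40)

diff = plus.mean() - control.mean()                    # estimated improvement
se = np.sqrt(plus.var(ddof=1)/40 + control.var(ddof=1)/40)
margin = stats.t.ppf(0.975, df=78) * se                # simple pooled-style df
print(f"improvement: {diff:.2f} D, 95% CI [{diff - margin:.2f}, {diff + margin:.2f}]")
# The interval, not the p-value, answers "how much improvement can I expect?"
```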

It looks like you're already thinking this way:

While I have mentioned "down rates" of -0.66 to -0.5 diopters per year, I don't think anyone believes me in that statement.  I don't know why that is so.  But that is the true issue for a person to face - who wishes to get out of -1.0 to -1.5 diopters, when entering a four year college.

As I said above, I am not an expert, and the above could all be wrong. If you find a mistake, I would be grateful if you'd explain it; others may be enlightened as well.


Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #3 on: November 06, 2013, 08:22:26 PM »
Hi Torvald,

Subject: I will argue about WHO defines the "real world".  For me, the science of the natural eye, as a dynamic system, is part of the "real world" - if you will allow it to exist.

If a true preventive study were ever to be conducted, I would EXPECT that each person would understand statistics.  The data I used came from Francis Young's study - but it was VERY HARD to follow.  Further, he did not calculate the critical SIGNIFICANCE LEVEL.  That is important (for a wise person), because it truly does show the effect that a plus could have - if used with great wisdom.  But let me respond to this statement:

1. That null hypothesis is very unlikely in almost any real-world situation where you are going to the trouble of collecting data. In the case of the bi-focal lenses, the null hypothesis is that the lenses have exactly zero effect. Without even gathering data, such a hypothesis already seems astoundingly unlikely. It's so unlikely, there's little point in refuting it.

I thank you for listing this.  It is repeatedly stated by "medical people" that: 1) A change in visual environment has NO EFFECT on the refractive state of the natural eye, and therefore, 2) A lens has NO EFFECT on the refractive state of the natural eye.  That is the NULL HYPOTHESIS.  It is necessary to either PROVE or DISPROVE it for the natural eye.  I know that "medical people" have NO INTEREST in this type of science.

It is sad, but part of the issue is to prove the ALTERNATIVE HYPOTHESIS: that a lens does have a profound effect on the natural eye - and that if the person has the resolve to wear it correctly, he could SLOWLY get out of -1.0 diopters - IF HE WISHES TO BE INTELLECTUALLY PART OF HIS OWN STUDY.

The "research hypothesis" makes this a "one tail" study.  An educated engineer, who understands THE GOAL OF HIS OWN STUDY, would be crucial for the success of the study.  I see on my site that you looked at the "Bifocal" study.  But have you looked at my further "analysis" (better organized, with the actual "p" values calculated)?

I would be GLAD to volunteer for this type of scientific study - in EITHER THE TEST OR CONTROL GROUP - provided I personally make the refractive status measurements myself.

I know you can never do this with children - that type of study is impossible.  But for an educated engineer, who understands the purpose of his own study, I think you can get results that are better than the bifocal study.

For those interested, here is the study - and further discussion for a "determined scientific person".

http://myopiafree.wordpress.com/study/

I will try to capture your commentary - and post it on this thread - for later discussion.

Thanks!



Offline Torvald

Re: Statistics, Science and Facts.
« Reply #4 on: November 06, 2013, 08:23:59 PM »
I have great confidence in these statistics, and in the results for people who are wise enough to understand them - and take them seriously.

Here's the other thought: statistics are not persuasive.

There are two reasons for this. One, I'll call "reasonable" and the other "unreasonable".

The reasonable reason for being unconvinced by statistics is that statistics are extremely error-prone. Here are a few illustrations. Remember the exit polls in the U.S. 2004 elections? Professional pollsters "goofed" (or so they say). Remember how confident Romney's pollsters were about a victory in 2012? You know how one year, "scientific studies" show that nutrient X helps prevent disease Y, and a couple years later, "scientific studies" show that nutrient X increases your chance of getting disease Y? These mistakes happen because bias creeps into statistics in a huge number of ways, which are extremely hard to catch, even by professionals.

Another problem is that statistics (technically, "parameters"—the things that you infer from statistics) try to describe very large numbers of things (people using a medical treatment, people earning money, people using plus lenses, etc.) using only a few numbers, like mean and variance. This means that you lose vast amounts of information whenever you use statistics. For parameters to be meaningful, you have to assume that the population they describe follows a certain "distribution" that is mathematically easy to work with. The most famous distribution is the "normal distribution" or "bell curve". For many populations in nature, the normal distribution is a good-enough approximation, but for many, it's not. In many populations, there are multiple trends, subgroups that behave differently from a main group, factors causing the distribution to be skewed one way or the other, etc. (Examples provided on request.)
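One way to see how much a pair of summary numbers can hide: a population containing a small subgroup that behaves differently from everyone else (invented data):

```python
import numpy as np

rng = np.random.default_rng(3)
# 90 "typical" subjects who barely change, 10 "responders" who change a lot.
typical    = rng.normal(0.0, 0.1, size=90)
responders = rng.normal(1.0, 0.1, size=10)
population = np.concatenate([typical, responders])

print(f"mean   = {population.mean():.2f}")      # pulled up by the small subgroup
print(f"median = {np.median(population):.2f}")  # describes the typical subject
# The mean here describes almost no individual in the population.
```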

Have you ever tried to cook a recipe and had it come out terrible, even though you thought you did everything just right? After some experimentation, maybe you find some tricky little thing that the recipe didn't spell out explicitly or that you misunderstood, which makes all the difference. The same thing happens all the time in engineering and science. Lots of tiny little factors make a big difference. In polling, tiny, apparently insignificant variations in the wording of a question in a survey can swing the results between opposite conclusions (with p < 0.01). Statistics don't normally talk about this kind of stuff, because it's very hard to talk about and research. The complexity of causal factors in any practical matter makes many statistical results, even valid ones, inapplicable in practice.

Another problem is that using statistical methods correctly is just plain tricky, even for people who are good at math. For example, the statistical fallacy described in the previous message is extremely common. Statisticians loudly complain about that fallacy, but professional scientists make it all the time in journal papers, and the papers pass peer review.

The number of ways that statistical results can go wrong goes far beyond what I've said here. Statisticians have looked into misuse of statistics and found problems of amazing severity and frequency. This article in The Atlantic provides a nice overview of the work of John Ioannidis, a medical researcher who looked into statistical errors in leading medical journals. "Of the 49 [most highly regarded medical articles published in the last 13 years], 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated."

Sad to say, when someone tells me that they have statistics that "show" some "result", I usually ignore them, and I think I'm right to do so. I view statistics as sometimes an important supplement to other reasoning, but not reasonably persuasive by themselves. When people trumpet statistics as "proving" something without bias and with greater certainty than can be achieved by common-sense reasoning, I hear that as an attempt to bully me into a conclusion that the speaker knows full well is not justified. When people have a reasonable case to make for a conclusion, they just make it; they don't have to resort to that kind of tactic.

For the unreasonable reason why people are unpersuaded by statistics (and by reasonable arguments of all kinds), I refer you to the Asch conformity experiments, which proved statistically that 75% of humanity just follow the rest of their tribe and don't think for themselves. Just kidding, of course, but there is certainly a grain of truth in that. It looks like you're onto this, too:

I know that few people actually LOOK at facts and science.

The real lesson here, for anyone who wants to spread a genuinely good idea, is that persuasion is itself a job that requires thought, dedication, and creativity, no less than coming up with or recognizing the good idea in the first place. The fact that an idea is good does not by itself give it much strength in the marketplace of ideas. Statistical support might help and might hurt, depending on how you use it.
« Last Edit: November 06, 2013, 08:35:48 PM by Torvald »

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #5 on: November 06, 2013, 08:31:39 PM »

Subject: Statistics and science - are not persuasive.

Item: I agree with you, for 99.9 percent of the general public. This is why this subject is not medicine, and never will be.


I always acknowledge this: Statistics can never solve a problem. Only the human mind can solve the problem of the natural eye with self-measured negative status – of about -1.0 diopters.
 BUT:
 Statistics can do this for all of you – if you understand them, and measure your refractive state yourself (as a group of scientists).
 They can demonstrate "early success", in that the group that is totally committed to wearing the plus can get out of it (change their status from -1.0 diopters to +0.5 diopters).
 The implication of Francis Young's study is exactly that. The value of his study was the establishment of a REASONABLE standard deviation (Sigma) for a large group of people. If the person is the "intellectual leader" in his own study (either test or control group) for eight months, BOTH GROUPS WILL SUCCEED. But they must fully understand the conditions and difficulties of Dr. Young's seminal study of the natural eye.

NO STUDY HAS EVER BEEN ATTEMPTED – WITH EDUCATED PEOPLE WHO WILL MAKE THE REQUIRED MEASUREMENTS.

In Dr. Young’s relied ONLY CHILDREN with NO INSTRUCTSIONS on how to use, or correctly-wear the plus lens. That is truly understandable, if you deal with a child in an “office”. The result can never be truly effective if the child “leans forward” by several inches, effectively cancelling the desired effect of the plus lens.

We must get ourselves out of the "confines" of that office. I would ask that the people in that "office" acknowledge these problems, and extend a friendly and helpful hand to all of us in this effort - even though it could drastically change the man's practice, "in his office". This is for the betterment of ALL OF US.

If the person develops growing, educated wisdom, and is prepared to examine a study that shows (statistically) that the plus had a strong effect on the refractive state of the natural eye (p < 10E-6) for all groups – I would believe that a person fully "invested" in his own study, would show the required change in refractive state from -1.0 diopters to +0.5 diopters in about eight months.

Today, that type of scientific, informed study, is effectively blocked by the "powers that be". I can not change that fact. I wish I could.

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #6 on: November 06, 2013, 08:38:46 PM »
Hi Torvald,

I am totally "aware" of your proposed problems with "statistics".  If they convince you to do nothing - then they served their purpose.

I was interested in helping people like Todd, and Brian Severson, SLOWLY get out of about -1.0 diopters.  I know that 99.9 percent of the public will NEVER have any interest in prevention.

I know you can not get a child to "sit up" (which is why a child gets into it), and in an "office" the minus is the only thing that works.

So children are beyond the "pale" at this point.

My interest was in helping my sister's children truly understand their choice in this matter - of avoiding "the office", and to monitor their own "distant vision", to always wear the plus (for all close work) and avoid that -0.66 to -0.5 diopter per year - that they would develop if they did not discipline themselves to wear the plus intelligently.

But I am curious, Torvald: do you believe that -0.5 diopter per year, or is it a statistical falsehood?

Thanks for your reply

Offline Torvald

Re: Statistics, Science and Facts.
« Reply #7 on: November 06, 2013, 09:01:35 PM »
It is repeatedly stated by "medical people" that, 1) A change in visual environment has NO EFFECT on the refractive state of the natural eye. and therefore, 2) A lens has NO EFFECT on the refractive state of the natural eye.

That is news to me. Thanks for pointing this out. This means that the effects of plus lenses on myopics should be very big news indeed, but they also face an even more-stringent "meme filter" for gaining acceptance in the wider community of eye doctors.

BTW, I've been told by eye doctors all my life that I need to wear my glasses all the time in order to prevent worsening of my vision. My sample might be skewed, but my sample of eye doctors doesn't think that lenses have no effect. Or maybe they were intentionally skewing the facts (as they understand them) in order to nag me to wear my glasses.

Not that it matters. I hope that you will give some thought to what I said about conflating facts about the statistical power of an experiment with facts about the domain being studied, about the unreliability of statistics, and about the peculiar challenges of persuading people to adopt good ideas. I'd be interested to hear your thoughts on any of those.

Offline Torvald

Re: Statistics, Science and Facts.
« Reply #8 on: November 06, 2013, 10:05:01 PM »
But I am curious, Torvald: do you believe that -0.5 diopter per year, or is it a statistical falsehood?

I don't have any strong opinion. It takes a lot of time and effort to check out a statistical result, and all I've done is look over this table for a few minutes. I really don't have a right to a strong opinion here.

The idea that plus lenses can help reverse myopia sounds very plausible to me, for all sorts of reasons (presented all over this web site), but not because of any statistics I've read. I'd like to try it, but I'm holding off for now because I've got too many other things on my plate. When I do try it, I'll give it the care and attention needed to do it right. (See my paragraph about "recipes" a couple messages back.)

The study looks like it might provide very good evidence for how much improvement myopics in general might get from plus lenses. I haven't really made sense of the statistics, though. Here are a couple questions that come to mind:

(1) What range of worsening or improvements happened? (I assume you don't mean that everyone in the control group got exactly 0.5 diopters worse each year.)

(2) Were the results normally distributed, or were there different subpopulations that responded differently to the same treatment, or some skewing of the distribution?

Are these questions answered somewhere?

Regarding (1), I see in this table that the standard deviations seem a little large. For example, at age 13, the bifocal group had a mean change in Rx of –0.06 diopters but standard deviations of 0.14 and 0.15 (depending on which eye). I have zero experience with both optometry and statistics, and I haven't read the accompanying article carefully, so I don't feel confident to judge the significance of these numbers.

Regarding (2), I just noticed something interesting in the table. At age 17, the bifocal group had a mean improvement of 0.23 diopters in the right eye but a median of 0.07. I think this usually results from a small number of subjects having a very big improvement while most had a small improvement or got worse. Do you know what happened here? The standard deviation is 0.43, so apparently one or two subjects got about half a diopter better while most of them got a little worse. I could be wrong, though.
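That reading of the summary numbers can be sanity-checked by inventing a small dataset with the same shape - these values are illustrative only, not the study's raw data:

```python
import numpy as np

# Invented changes in Rx (diopters): most subjects change little or get
# slightly worse, while two improve by nearly a diopter.
changes = np.array([-0.10, -0.05, 0.00, 0.05, 0.07, 0.10, 0.15, 0.90, 1.00])

print(f"mean   = {changes.mean():.2f}")       # 0.24: pulled up by two subjects
print(f"median = {np.median(changes):.2f}")   # 0.07: the typical subject
print(f"std    = {changes.std(ddof=1):.2f}")  # 0.41: inflated the same way
```

A mean well above the median, with a standard deviation larger than both, is exactly the signature of a few big improvers in an otherwise flat group.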

A possible clue: the control group's rate of worsening decreased quite a lot at age 17: down to a mean of –0.24 or –0.28. Do people's eyes normally stabilize at around age 17?

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #9 on: November 06, 2013, 10:06:23 PM »
Hi Torvald,

I very specifically will never say these several words:  Cure, Therapy, Easy, Fast, or Recovery from ALL CONDITIONS.
I very much respect a man sitting in an office with his minus lens.  Yes, I know he wants to "please" you with a strong minus lens.  There is no point in me "arguing" about that issue.  What I say, about measured refractive STATES of the eye, is not medicine.  If the person is thinking "cure" when I say that, he should "erase" his mind, and try again - this time to look at objective facts, and yes, statistics.  You can NEVER "prescribe prevention".

Further, no one in medicine has ANY RESPONSIBILITY for "any prevention".  Why should you bother them with that issue?

But I like your response:


Torvald>  That is news to me. Thanks for pointing this out. This means that the effects of plus lenses on myopics should be very big news indeed, but they also face an even more-stringent "meme filter" for gaining acceptance in the wider community of eye doctors.

Otis> I MUST be very specific.  I say PERSONALLY measured negative status, as in: you have a Snellen chart up, and read 20/40.  You, with education, hold up a -3/4 diopter lens, and read 20/20 through that lens.  But you are smart enough to know the "statistics": that any full-time wearing of any minus does not solve any problem - it only makes it worse.  (That -1/2 diopter per year - please check.)

Torvald>  BTW, I've been told by eye doctors all my life that I need to wear my glasses all the time in order to prevent worsening of my vision.

Otis> That is massively the problem.  This is part of 1) Environment has no effect, 2) Minus has no effect.  This has NEVER been science.  Medical people MEMORIZE THEIR 'FACTS'. Then they never check.  Then they repeat them, and expect you to believe  them. That is why I check them - and you should also. They truly can not help you.

Torvald>  My sample might be skewed, but my sample of eye doctors doesn't think that lenses have no effect.

Otis> That is a double negative.  Better to say: my doctor informs me that a minus lens has no effect on the refractive status of all eyes.  (But they just do not care - and they are trying to avoid the implication of your question.  You are not considered intelligent enough to understand a scientific question and its answer.)

Torvald>  Or maybe they were intentionally skewing the facts (as they understand them) in order to nag me to wear my glasses.


Otis> It was NEVER their job to prevent a negative state - for the natural eye.  Why do you ask them for anything? They are "busy" in their office, and must make you "happy" in about 10 minutes.  I do not blame them.

Otis>  This is not about "nagging".  It is about the wise person, who needs to get back to 20/20, from 20/40, and always avoid entry.  I am very specific about this issue.  If a person can not read the 20/50 line, I discourage him from working on plus-prevention.  The reason is the statistics I provided to you.

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #10 on: November 07, 2013, 10:57:47 AM »
Hi Torvald,

Subject: Further clarification on level-of-significance.

Otis> When I present statistics, I say that they are truly necessary for science.  You seem to disagree with that concept.  But let me suggest that a more complete review is required - and you need to review the calculated "Significance".


Torvald>  When people say that a result with a p-value of less than 0.05 is "significant", I think they are usually misusing statistics.

Otis> That is your opinion, but there is no justification for it.  Further, I look for "highly significant", and I expect far better than that.  If the result was "just significant" - I would ignore the study.

Torvald> Neither your web page cited above nor http://www.myopia.org/bifocaltable4.htm explain how the "level of confidence" was calculated,

Otis> You are mistaken.  If you down-load the spread sheets (Age 6 to 11, and Age 12 to 18), you will find the "Level of Significance" calculated.  The bifocal data is the SOURCE of the calculations - but the calculations WERE NOT COMPLETED there.

Torvald>  but the all-too-common procedure is to run "Student's t-test", where the null hypothesis is that the two samples were generated by random variables with exactly the same mean, assuming that both variables are normally distributed and have equal variances.

Otis> I have discussed that issue for the people who would be in the study.  I would invite you to down-load those spread-sheets, and look at the calculations yourself.

Otis>  In the final analysis, I ask myself whether I would want to be in a (preventive) study, where I KNEW statistics, and could be in either the "minus group" or the "plus group".  I would be pleased to be in either group, even if it meant that I would go from 20/40 (-1.0 diopters) to 20/70 (-1.75 diopters), always self-measured.

Otis> That is the real test of engineering competence.  Go back and down-load the spread sheets.

Otis> I will add commentary by an ophthalmologist - shortly.

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #11 on: November 07, 2013, 11:02:14 AM »
Hi Torvald,

Subject: Are there optometrists who insist that ANY prevention will always be impossible - as correct, pure science?  That is THEIR null hypothesis.  I would hardly say "do not trust them", but I would like to see proof that they are correct.


Yes, of course.  But they are judging YOU, and your lack of motivation to take on the responsibility of true prevention (from 20/40 and -1.0 diopters.)

Then there are some brave ophthalmologists who SUGGEST that some recovery is possible, as per:

http://frauenfeldclinic.com/reversing-course-myopia-one-month/

So to suggest there is "scientific proof" for any of this, is just "not right". That is why I rely on science and statistics to determine what is "reasonably possible", in the real-world of science and statistics.

Enjoy,
« Last Edit: November 07, 2013, 11:24:20 AM by OtisBrown »

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #12 on: November 07, 2013, 05:19:07 PM »
Statistics 101:

Confidence interval - one way to present the data:

http://www.surveysystem.com/sscalc.htm

Significance level, or evaluation:


http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html

The term, "Confidence Level" was not correct in the Francis Young study.  What was calculated was the SIGNIFICANCE LEVEL.  I have re-calculated this level in my analysis.

Thanks,


In this study by Francis Young, "Significance", or "p value", as listed is always less than 0.001.

In fact it was 0.000001

To suggest that this study is not "significant", over the five years - is not even reasonable.

Thanks for your interest,

http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#sl
« Last Edit: November 07, 2013, 07:13:01 PM by OtisBrown »

Offline OtisBrown

Re: Statistics, Science and Facts.
« Reply #13 on: November 08, 2013, 08:03:07 AM »
Hi Torvald,

Subject: Which "real world" are we talking about?

I will never "cross"  an optometrist IN HIS OFFICE.  In fact, I suggest you listen to this "real world" described by a nice person, and an optometrist.

http://www.youtube.com/watch?v=kephdEyEKxU#t=40

Yes, Torvald, I truly "understand" that real world, where you have 20/40, and she can hold up a -1.0 diopter and show you that it makes 20/20 for you.  I NEVER argue with that "real world" - if that is what you are talking about.

But I believe there is a "scientific - statistical" world, that you insist is NOT the "real world". 

This is why I put up my own Snellen, and read it objectively.  If I am at 20/40, a -1.0 diopter will show me that I have a natural and normal negative status.  Since I realize that this concept cannot be "fit into" the optometrist's "real world", I have no choice but to take care of my distance vision myself.

Those are the TWO "real worlds" we are talking about.  The optometrist's "real world" has no relationship to the objective, "scientific, statistical" real world.

Thanks,


I have great confidence in these statistics, and in the results for people who are wise enough to understand them and take them seriously.

Here's the other thought: statistics are not persuasive.

There are two reasons for this. One, I'll call "reasonable" and the other "unreasonable".

The reasonable reason for being unconvinced by statistics is that statistics are extremely error-prone. Here are a few illustrations. Remember the exit polls in the U.S. 2004 elections? Professional pollsters "goofed" (or so they say). Remember how confident Romney's pollsters were about a victory in 2012? You know how one year, "scientific studies" show that nutrient X helps prevent disease Y, and a couple years later, "scientific studies" show that nutrient X increases your chance of getting disease Y? These mistakes happen because bias creeps into statistics in a huge number of ways, which are extremely hard to catch, even by professionals.

Another problem is that statistics (technically, "parameters"—the things that you infer from statistics) try to describe very large numbers of things (people using a medical treatment, people earning money, people using plus lenses, etc.) using only a few numbers, like mean and variance. This means that you lose vast amounts of information whenever you use statistics. For parameters to be meaningful, you have to assume that the population they describe follows a certain "distribution" that is mathematically easy to work with. The most famous distribution is the "normal distribution" or "bell curve". For many populations in nature, the normal distribution is a good-enough approximation, but for many, it's not. In many populations, there are multiple trends, subgroups that behave differently from a main group, factors causing the distribution to be skewed one way or the other, etc. (Examples provided on request.)
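The information loss described above is easy to demonstrate.  The sketch below (hypothetical populations, invented for illustration) builds two populations with nearly identical mean and standard deviation: one is a single bell curve, the other is made of two separate subgroups that the summary numbers completely hide.

```python
import random
import statistics

random.seed(1)

# Population A: one bell-curve-shaped group centred at 10.
pop_a = [random.gauss(10, 5) for _ in range(100_000)]

# Population B: two distinct subgroups (a bimodal mixture), tuned so that
# the overall mean and spread come out close to population A's.
pop_b = ([random.gauss(5, 1) for _ in range(50_000)] +
         [random.gauss(15, 1) for _ in range(50_000)])

for name, pop in [("A (one group)", pop_a), ("B (two subgroups)", pop_b)]:
    print(name,
          "mean=%.2f" % statistics.mean(pop),
          "stdev=%.2f" % statistics.stdev(pop))

# Both report a mean near 10 and a standard deviation near 5, yet almost
# no individual in population B is anywhere near the "average" member.
```

Any analysis that reduces population B to its mean and variance would treat it as interchangeable with population A, even though the two describe completely different situations.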

Have you ever tried to cook a recipe and had it come out terrible, even though you thought you did everything just right? After some experimentation, maybe you find some tricky little thing that the recipe didn't spell out explicitly or that you misunderstood, which makes all the difference. The same thing happens all the time in engineering and science. Lots of tiny little factors make a big difference. In polling, tiny, apparently insignificant variations in the wording of a question in a survey can swing the results between opposite conclusions (with p < 0.01). Statistics don't normally talk about this kind of stuff, because it's very hard to talk about and research. The complexity of causal factors in any practical matter makes many statistical results, even valid ones, inapplicable in practice.

Another problem is that using statistical methods correctly is just plain tricky, even for people who are good at math. For example, the statistical fallacy described in the previous message is extremely common. Statisticians loudly complain about that fallacy, but professional scientists make it all the time in journal papers, and the papers pass peer review.
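One common way this goes wrong, offered here purely as an illustration (it may not be the exact fallacy referenced above), is running many tests and then treating any result below p < 0.05 as a discovery.  The simulation below applies a significance test to 100 "studies" of pure noise, where no real effect exists in any of them:

```python
import math
import random
import statistics

random.seed(7)

def p_value(sample, mu=0.0):
    """Two-sided one-sample test p-value, using a normal approximation
    (adequate here since each sample has 50 observations)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu) / (statistics.stdev(sample) / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# 100 "studies" of pure noise: there is no real effect in any of them.
false_positives = 0
for _ in range(100):
    sample = [random.gauss(0, 1) for _ in range(50)]
    if p_value(sample) < 0.05:
        false_positives += 1

# Roughly 5 of the 100 noise-only "studies" come out "significant" at
# p < 0.05 - which is exactly what the 0.05 threshold allows.
print("noise-only studies declared significant:", false_positives)
```

A researcher who runs twenty comparisons and publishes only the one that cleared p < 0.05 has found noise, not an effect, yet the published paper looks exactly like a genuine result.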

The number of ways that statistical results can go wrong goes far beyond what I've said here. Statisticians have looked into misuse of statistics and found problems of amazing severity and frequency. This article in The Atlantic provides a nice overview of the work of John Ioannidis, a medical researcher who looked into statistical errors in leading medical journals. "Of the 49 [most highly regarded medical articles published in the last 13 years], 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated."

Sad to say, when someone tells me that they have statistics that "show" some "result", I usually ignore them, and I think I'm right to do so. I view statistics as sometimes an important supplement to other reasoning, but not reasonably persuasive by themselves. When people trumpet statistics as "proving" something without bias and with greater certainty than can be achieved by common-sense reasoning, I hear that as an attempt to bully me into a conclusion that the speaker knows full well is not justified. When people have a reasonable case to make for a conclusion, they just make it; they don't have to resort to that kind of tactic.

For the unreasonable reason why people are unpersuaded by statistics (and by reasonable arguments of all kinds), I refer you to the Asch conformity experiments, which proved statistically that 75% of humanity just follow the rest of their tribe and don't think for themselves. Just kidding, of course, but there is certainly a grain of truth in that. It looks like you're onto this, too:

I know that few people actually LOOK at facts and science.

The real lesson here, for anyone who wants to spread a genuinely good idea, is that persuasion is itself a job that requires thought, dedication, and creativity, no less than coming up with or recognizing the good idea in the first place. The fact that an idea is good does not by itself give it much strength in the marketplace of ideas. Statistical support might help and might hurt, depending on how you use it.
« Last Edit: November 08, 2013, 11:28:31 AM by OtisBrown »

Offline Torvald

  • Newbie
  • *
  • Posts: 29
Re: Statistics, Science and Facts.
« Reply #14 on: November 09, 2013, 08:55:15 PM »
Otis,

There appears to be some miscommunication between us.

To suggest that [Francis Young's] study is not "significant", over the five years - is not even reasonable.

I did not suggest that. I said almost the exact opposite.

Torvald>  When people say that a result with a p-value of less than 0.05 is "significant", I think they are usually misusing statistics.

Otis> That is your opinion, but there is no justification for it.

The justification was the rest of the message you quoted from.

Torvald> Neither your web page cited above nor http://www.myopia.org/bifocaltable4.htm explain how the "level of confidence" was calculated,

Otis> You are mistaken.  If you down-load the spread sheet, (Age 6 to 11, and Age 12 to 18) you will find the "Level of Significance" calculated.

Thanks for the correction. I'll have to take your word for it, since I don't know what you're referring to.

But I believe there is a "scientific - statistical" world, that you insist is NOT the "real world". 

I didn't insist on that, and I don't think that.

If you have something to say about my point that significance in the ordinary sense should not be confused with statistical significance, I'd like to read it.