The headline at the Social Weather Stations website announcing the results of its Third Quarter National Survey is not supported by the statistical data collected in the survey:
NET SATISFACTION WITH PGMA INCHES UP TO -11
As the SWS article points out, GMA's Net Satisfaction Rating (NSR) was calculated to be -13% in the Second Quarter. Now it is calculated to be -11%. The change in the NSR is therefore plus 2 percentage points, correctly rounded.
So did GMA's NSR "inch up"?
NO! It was UNCHANGED.
Why? Because the standard Margin of Error in the RAW STATISTICS of each SWS survey, as SWS well knows, is about plus or minus 3% at the 95% confidence level (a two-standard-deviation confidence interval). On this account alone, the headline is already false and misleading. But it's actually worse than that. As a COMPUTED STATISTIC -- the difference between two separate raw statistics, those satisfied with GMA minus those dissatisfied -- the Net Satisfaction Rating actually has a statistical Margin of Error twice as big, that is, plus or minus 6%!
Now if you take the Third Quarter NSR and subtract from it the Second Quarter NSR, and then observe that the result is plus 2%, the technically correct way to report this is that the CHANGE in the NSR was plus 2% PLUS OR MINUS 12%, since each of the NSRs itself carries a statistical Margin of Error of plus or minus 6%.
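A minimal sketch of that arithmetic in Python, following the post's worst-case convention that the Margin of Error of a difference is the sum of the component MOEs (textbook practice would add them in quadrature; the post's rule is the more conservative bound):

```python
RAW_MOE = 3.0           # MOE of any single tallied percentage, +/-3%
NSR_MOE = 2 * RAW_MOE   # NSR = satisfied minus dissatisfied: +/-6%

nsr_q2 = -13.0          # Second Quarter NSR (%), as quoted above
nsr_q3 = -11.0          # Third Quarter NSR (%)

change = nsr_q3 - nsr_q2       # +2 percentage points
change_moe = 2 * NSR_MOE       # difference of two NSRs: +/-12%

print(f"Change in NSR: {change:+.0f}% +/- {change_moe:.0f}%")
print("Statistically significant?", abs(change) > change_moe)  # False
```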
So let me summarize the important points to remember as an intelligent reader of public opinion surveys:
(1) Any statistic that comes directly from a tally of the responses to a question on the survey questionnaire has a built-in Margin of Error of plus or minus 3% (actually plus or minus 2.89%, the number you get by taking the reciprocal of the square root of 1,200).
(2) Any statistic computed from two statistics in category (1), such as the Net Satisfaction Rating, which is the difference of two such statistics, will have a built-in Margin of Error of plus or minus 6%.
(3) The CHANGE from one survey to the next in a computed statistic like those in category (2) will have a built-in Margin of Error of plus or minus 12%.
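The same three rules in one hypothetical helper (the function name and structure are mine; the 100/sqrt(n) figure is just point (1)'s reciprocal-of-the-square-root rule expressed in percent):

```python
import math

def margin_of_error(n_respondents: int, level: int) -> float:
    """Built-in MOE in percentage points for a sample of n_respondents.

    level 1: a raw tallied percentage           -> 1/sqrt(n)
    level 2: a difference of two level-1 stats  -> twice that (e.g. the NSR)
    level 3: a change between two level-2 stats -> four times that
    """
    raw = 100.0 / math.sqrt(n_respondents)
    return raw * {1: 1, 2: 2, 3: 4}[level]

for lvl in (1, 2, 3):
    print(f"({lvl}) +/- {margin_of_error(1200, lvl):.2f}%")
# (1) +/- 2.89%   (2) +/- 5.77%   (3) +/- 11.55%
```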
One thing is clear: the so-called Net Satisfaction Rating is a REALLY LOUSY STATISTIC when your random sample is only 1,200 respondents.
No wonder they say that Statistics is the Science of proving any damn thing you want to!
The headline should have read:
GMA's Net Satisfaction Rating Was Unchanged
Any apparent arithmetic change is NOT statistically significant. Not by a LONG SHOT! Thus using this headline is just as UNETHICAL as if a broadsheet had published some bald lie. Except this one isn't so bald, i.e., so obvious, which is what makes it despicable to me: the misuse of science and mathematics.
RELATED POSTS:
Public Opinion Polling Is a Genre of Journalism
How the Surveys Have Lost Their Sting
Fallacy of the Leading Question
Checking If A Coin Is Fair
14 comments:
MB,
Most of us never ask the right questions, so you are spot on. The NSR was really invented, I think, because there is always that unresponsive lot. If you look at a graph of the NSR for any given President, it swings wildly because it is a composite of two statistically varying quantities, and that variation has a lot to do with those unresponsives. Notice they have created a difference between two statistics, because the sum of the statistics is really an embarrassing hole that shows those unresponsives and how many they are.
I think in fact that the percentage of unresponsives is the TRUE MARGIN of error in the NSR. Hahaha plus or minus 18 percent. It's really quite a useless number except in the most extraordinary of circumstances.
Easier to ask your barber, jeepney driver or other "typical" friend. Same MOE. hehe
The rounding is just because the margin of error even in the raw statistics is already plus or minus 3% anyway, so quoting statistics to TENTHS of a percent really makes no sense.
DJB, like MB I want to ask some silly questions. If there is a net satisfaction rating, would there be a gross satisfaction rating? Would the GSR be 29 percent? Because if you subtract 18 percent from 29 you get an NSR of 11.
But then the total percentage would become 113 and not 100 percent. Which means that there really is no way to compute net satisfaction because of the 18 percent that did not respond?
I say add the 18 percent to the 48 and then subtract 37 percent to get the gross satisfaction rating, and then from this get the net satisfaction rating by deducting the 18. Which of course is 11 percent, plus or minus 9 percent.
But you are right DJB 'coz math and statistics is definitely not my best subject.
BFR,
The best way to think about it is this. The so-called Margin of Error, or MOE (in this case 3%), is actually the smallest change or difference one can measure.
It's like if I ask you "how many children do you have, BFR?"
It would be nonsense for you to say something like "I have 2.5 children."
Likewise, when we talk about statistics collected from 1,200 respondents: if I ask what percentage are satisfied with GMA, an answer of 2.5 MOEs would be just as nonsensical.
In other words "3%" is actually the unit of measure and there is nothing smaller! Just like the number of your children cannot sensibly change by one half or one third of a child, neither can the satisfaction rating increase or decrease by a number smaller than 3%!
That is the meaning of the MOE. It is the reason there is no FREE LUNCH in statistics. It is the price we pay for NOT asking 100% of the people what their opinion is, but are limited to 1200 for practical purposes.
Always keep in mind also that the SWS asks a DIFFERENT set of 1200 people every time it conducts a survey, even if it asks the same question. That is the other reason we cannot possibly believe in any change or difference smaller than the MOE.
I hope this was a more NON-MATHEMATICAL explanation, BFR.
BTW: the only silly questions are the ones we DON'T ask!
Thanks for this.
BFR,
Actually the NET SATISFACTION RATING is computed by the SWS by subtracting the GROSS DISSATISFACTION RATING from what you are calling the GROSS SATISFACTION RATING.
You see, the SWS asks two separate questions regarding satisfaction with the President: (1) Are you satisfied with the President? (37%) and (2) Are you dissatisfied with the President? (48%). These are the two GROSS ratings mentioned above.
But notice that 37%+48%=85%
The undecided on these two questions is always in double digits.
So the SWS and other pollsters INVENTED the Net Satisfaction Rating by taking the difference between the two answers. They have never disclosed to the public that the M.O.E. in the NSR is TWICE what it is in all the other statistics like your Gross Satisfaction Rating.
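A sketch of that computation with the quoted figures (variable names are mine):

```python
satisfied    = 37.0   # "Are you satisfied with the President?"    (%)
dissatisfied = 48.0   # "Are you dissatisfied with the President?" (%)

nsr = satisfied - dissatisfied                  # -11, the headline number
undecided = 100.0 - (satisfied + dissatisfied)  # 15, never headlined

print(f"NSR: {nsr:+.0f}%  (undecided: {undecided:.0f}%, MOE: +/-6%)")
```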
DJB,
The population of the Philippines is about 85 million. About 48 million, those who have a real say on Gloria's performance, are registered voters.
If I understand correctly, SWS's random sample is only 1,200. We don't even know if this sample is representative of the whole country. With this minuscule sample, wouldn't you agree that the survey results, whether GMA's approval rating changed or not, are meaningless?
DJB, if the next time the survey is published GMA's net satisfaction goes up to -9 (from today's -11), would that mean it is still unchanged? What if in every succeeding survey the figure moves up by +2, until such time that the figure becomes +11? Would it mean that the net satisfaction was unchanged from survey to survey, but across all surveys there was a significant change?
To clarify the first sentence: what if the net satisfaction rating in the next survey rises two points to 'negative nine' (from today's 'negative eleven')...
cvj, I know what you're getting at, but first let me re-emphasize a point I've been making about the NSR: its Margin of Error is not +/-3% but +/-6%. So let me rephrase your question. What if GMA's NSR goes up by 3% this quarter and 3% again in the Fourth Quarter, making a +6% total change over two quarters? Then indeed we can say that something statistically significant changed in the NSR between the Second and Fourth Quarters. But we would still not be able to conclude anything definite about what happened in any single quarter along the way.
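A sketch of that scenario (the quarterly moves are hypothetical, and the +/-6% threshold is the NSR MOE used in this reply):

```python
NSR_MOE = 6.0           # NSR Margin of Error per this post, for n = 1,200
moves = [+3.0, +3.0]    # hypothetical Q2->Q3 and Q3->Q4 changes (%)

each_significant = [abs(m) >= NSR_MOE for m in moves]
total = sum(moves)

print("Single-quarter moves significant?", each_significant)  # [False, False]
print(f"Q2->Q4 change: {total:+.0f}%, at least one MOE? {total >= NSR_MOE}")  # True
```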
On a more general level, remember that a public opinion poll is a MEASUREMENT, much like a temperature or pressure measurement.
But the PRECISION with which we can make that measurement is strictly and logically limited by the SIZE of our random sample.
Thus statistical measurements are often called ESTIMATES, because every result has a built-in IMPRECISION, which is the price we pay for being able to say many significant things about 50 million voters by SCIENTIFICALLY interpreting what just 1,200 people say.
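To see the price concretely, here is the reciprocal-square-root rule of thumb at a few illustrative sample sizes (the last one echoes the roughly 48 million registered voters mentioned earlier):

```python
import math

for n in (300, 1200, 4800, 48_000_000):
    print(f"n = {n:>10,}: built-in MOE ~ +/- {100 / math.sqrt(n):.2f}%")
# Quadrupling the sample only halves the MOE; precision comes dearly.
```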
It is really a marvelous science...I hate to see it fall into the hands of the propagandists and the naifs.
Some of the most beautiful mathematics comes from that branch of statistics and number theory called COMBINATORICS.
It makes me so mad that such power has fallen into the hands of evil people.
Like great art or sculpture owned and collected by those whose real tastes run closer to kitsch or political camp.
The Third Quarter headline should have said that GMA's NSR was statistically indistinguishable from the Second Quarter result.
cvj,
The Palace was really working hard with SWS, PDI and ABSCBN to get this lil LIE into the headline, that the NSR had changed for the better, even just by an inch. We have no data to justify such a headline.
But the historical records of the SWS are the most powerful data for judging overall trends and patterns. There you can see that a 2% change in the NSR is a MINUSCULE change as most quarter-to-quarter changes go.
I've looked at years worth of NSRs for various Presidents. I tell ya, it's a lousy statistic, jumps all over the place from one quarter to the next.
It's because at +/-6% (1,200 respondents) it has a 12%-wide footprint. Hard to tell where it's stepping in any given quarter.
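A small simulation makes the footprint visible. Hold the true rates fixed (the 37%/48% figures quoted above, used here as a hypothetical truth) and draw a fresh sample of 1,200 each quarter, as SWS does; the measured NSR bounces even though opinion never moves:

```python
import random

random.seed(1)
TRUE_SAT, TRUE_DIS = 0.37, 0.48   # hypothetical true rates, held constant
N = 1200                          # SWS sample size

def one_survey():
    """Simulate one survey: tally satisfied vs dissatisfied, return NSR in points."""
    sat = dis = 0
    for _ in range(N):
        r = random.random()
        if r < TRUE_SAT:
            sat += 1
        elif r < TRUE_SAT + TRUE_DIS:
            dis += 1
    return 100.0 * (sat - dis) / N

print([round(one_survey()) for _ in range(8)])
# e.g. [-11, -8, -13, -10, ...] -- scatter from sampling alone
```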
DJB, thanks for your clarification. I agree that an honest headline would have clearly stated that the results were within the margin of error and that no conclusion can be drawn either way. As you have stated above, if anything, they should have compared the outcome with a previous period where the change was statistically significant.
Even given a statistically significant result, however, I don't think we can derive a conclusion from any upward or downward movement in a straightforward manner. These could, after all, just be random bounces or dips, similar to fluctuations in the stock market as it takes in and reacts to information from different sources.
I believe you're right to criticize the NSR as this manner of presentation masks the existence of the portion that is unknown.
Folks:
Here's the thought that will seal the moral of this story...
I just realized that Margin of ERROR is a totally terrible name. It should really be Margin of Correctness, though that sounds awful.
But you see, what the math literally means is that in the 2nd Quarter, when the announced NSR was -13%, it really could just as easily have been, for measurement purposes and due purely to random sampling effects, as high as -7% or as low as -19%.
In other words, SWS could just as easily have seen the -11% number last time around, while still including the actual -13% number within the Margin of Correctness, err, Error.
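Put as intervals (a sketch using the post's +/-6% NSR MOE), the two quarters' readings overlap almost entirely:

```python
moe = 6.0                 # NSR Margin of Error per this post
q2, q3 = -13.0, -11.0     # announced NSRs (%)

q2_band = (q2 - moe, q2 + moe)   # (-19, -7)
q3_band = (q3 - moe, q3 + moe)   # (-17, -5)

overlap = max(q2_band[0], q3_band[0]) <= min(q2_band[1], q3_band[1])
print(f"Q2 band {q2_band}, Q3 band {q3_band}, overlap: {overlap}")  # True
```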
Now all this may seem like picayune detail to some, BUT WAIT TILL SWS IS USED FOR CHACHA!
Citizens, Pundits, Bloggers, better bone up on Stat 101!
The “Officers and Full-Time Staffs” shown in the SWS website include a list of names under the heading of “Field” (8), “Field Anchors” (5) and a “Project Anchor/Auditor” (1).
Are these “staffs” the “field workers” who actually conduct the survey of the usual 1,200 respondents? If not, is the actual SWS survey work “outsourced”?
I think the problem with surveys is not so much how the question is phrased or the manner in which it is asked, nor even how the answers are analyzed; rather, it's the presumed integrity, or plain honesty, of the survey takers tasked to record the reply of each respondent in the “field.”
Is the reply of the respondent audio- or video-taped? Does the respondent get to see, and is he/she allowed to verify or certify (with initials), that his/her response to the question posed (say, “yes” or “no”) is actually what was recorded by the SWS (or “outsourced”) survey taker?
Dean, Gosh the way you explain it, the stats seem easy enough to understand.
You a teacher on the side?