Over the years, public opinion polling in the Philippines has acquired a generally good reputation. More than any other, Social Weather Stations has earned the trust of the public, with Pulse Asia Inc. trying very hard to play Avis to its Hertz at No. 2. Both have established scientific public opinion polling in the Philippines as firmly as Gallup once did in the United States. But that reputation and trust are of the same kind that people accord to newspapers, TV, radio and journalism in general. And not all polls are created equal. Like the mass media itself, polling has its "tabloid" practitioners -- every absolutely meaningless radio and TV call-in poll, for instance, is really "For the Entertainment of Fools Only."
SURVEYS HAVE LOST THEIR STING

The audience for surveys has thus matured: people have developed a healthy skepticism towards them, while giving pollsters the same benefit of the doubt they give other journalists who report, analyze and interpret the vast and ever-changing entities called Current Events and Public Opinion. Another survey? Ho-hum, most people say. This skepticism is well-founded, not because the pollsters are dishonest or their results inaccurate, but because people sense the limitations of the craft. From a mere 1,200 respondents, pollsters are expected to divine great generalities about some 42 million adult citizens. People also know that the logistics of conducting a survey make it necessarily just a snapshot of fast-moving events and changing situations. And sometimes the questions seem "biased" or "loaded," and people have noticed. Surveys have lost their sting. The power of the surveys has faded as people have become jaded.
POLLSTERS ARE PART OF THE MASS MEDIA

People have learned that just because a public opinion pollster uses statistical and scientific techniques to collect and analyze its raw data does not mean its product is pure scientific information. People detect in pollsters a familiar behavior they find in ordinary journalists -- they editorialize on their own data. Social Weather Stations reports are not given the same scientific value as, for example, meteorological weather reports from Pag-asa or vulcanological bulletins from Phivolcs. One can see the difference clearly in the fact that we all seem to find ourselves "agreeing" or "disagreeing" with the published results of public opinion polls in a manner that never occurs when we hear the morning weather forecast or view a satellite image of the archipelago. The public now regards opinion pollsters as closer cousins to newspapers and other forms of news and opinion journalism than to physical weather monitoring agencies. Public Opinion Polling Is a Genre of Journalism.
IMPORTANCE OF SURVEY AND QUESTION DESIGN

Because every public opinion survey is based on questions designed by the pollster, that pollster gets to analyze, report upon and interpret the results of the survey for the mass media and the general public. There is, in other words, an editorializing function that pollsters also perform. Just as newspaper editorials often interpret news and even other opinion, so too do public opinion pollsters comment on their own data. Sometimes, I have noticed, the Media Release wrappers of both Pulse Asia and SWS do not accurately reflect what their own survey data actually say when strictly and logically interpreted, because the design of the survey, especially the number, manner, sequence and mix of the questions asked, has everything to do with the pollster's interpretation and the subsequent reportage. That subsequent reportage has a very important effect of its own. Few journalists and media people in the Philippines are qualified to properly interpret the statistical data collected in a survey, nor do many of them seem inclined to learn the mathematical and scientific basics, as evidenced by the truly ignorant reporting about what a survey says, sometimes from the pollster itself!
2006 SWS NATIONAL SURVEY RESULTS AND SPIN

Take for example the headline at the website of the Social Weather Stations today: "First Quarter 2006 Social Weather Survey: Options For Toppling GMA: Coup Gets Split Opinions, People Power Gets 48%, Pro-Resign Gets 44%"
SWS asked 1,200 randomly selected adult registered voters from all over the Philippines whether they AGREED, DISAGREED or were UNDECIDED about the following statements:
(1) If President Arroyo resigns, it will be good for the country.
(2) It is good for the country if PGMA will be removed by a People Power.
(3) It is good for the country if PGMA will be removed by means of a military coup.
The results are summarized in TABLE 1 from the SWS website, which I've copied below with an added TOTAL column.
TABLE 1. SWS March 2006 (in percent; TOTAL column added by me)

| It would be good for the country if... | Agree | Undecided | Disagree | TOTAL |
|---|---|---|---|---|
| GMA resigns | 44 | 29 | 23 | 96 |
| GMA is removed by a People Power | 48 | 21 | 27 | 96 |
| GMA is removed by a Military Coup | 36 | 23 | 35 | 94 |
As a scientist, I don't like throwing away good data or excluding them from the final report on a scientific investigation. I don't know how the missing 4%, 4% and 6% respectively break down between Don't Know and Refused Response; in fact, together with Undecided, these categories seem to belong under a single one called NEITHER, which produces the following, more accurate presentation of SWS's own data...
SWS March 2006

| It would be good for the country if... | Agree | Neither | Disagree | TOTAL |
|---|---|---|---|---|
| GMA resigns | 44% | 33% | 23% | 100% |
| GMA is removed by a People Power | 48% | 25% | 27% | 100% |
| GMA is removed by a Military Coup | 36% | 29% | 35% | 100% |
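The folding is simple arithmetic. Here is a minimal Python sketch (my own illustration, not SWS code; the Undecided figure for the Resigns item is inferred from the published Agree, Disagree and missing-percentage figures) that recomputes the Neither column by adding the unreported Don't Know / Refused remainder to Undecided:

```python
# SWS March 2006 figures as published (in percent); rows do not sum to 100
# because Don't Know / Refused responses were left out of the release.
rows = {
    "GMA resigns":                    {"agree": 44, "undecided": 29, "disagree": 23},
    "GMA removed by People Power":    {"agree": 48, "undecided": 21, "disagree": 27},
    "GMA removed by a Military Coup": {"agree": 36, "undecided": 23, "disagree": 35},
}

for option, r in rows.items():
    remainder = 100 - (r["agree"] + r["undecided"] + r["disagree"])  # DK / refused
    neither = r["undecided"] + remainder                             # fold into Neither
    print(f"{option:32s} Agree {r['agree']}%  Neither {neither}%  "
          f"Disagree {r['disagree']}%  Total {r['agree'] + neither + r['disagree']}%")
```

Every row now accounts for all 100% of respondents, which is the whole point of the corrected table.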
Recall the headline: "First Quarter 2006 Social Weather Survey: Options For Toppling GMA: Coup Gets Split Opinions, People Power Gets 48%, Pro-Resign Gets 44%"
Why did the SWS headline mention the 48% under "People Power Is Good" and the 44% under "Resign Is Good," but not the 36% that "Military Coup Is Good" received?
I'm not sure whether the reason was sinister or merely technical. But as it stands, the headline GIVES THE IMPRESSION that (48 + 44) or 92% are actually for removing GMA by EITHER People Power or Resignation. That impression would not be given if the headline had read "People Power 48%, Pro-Resign 44%, Military Coup 36%," because it would then become obvious that SWS did not present mutually exclusive choices to the respondents: the questions were of the form "Do you agree or disagree that it would be good for the country if GMA resigned?" The fact of the matter is, SOME people who agreed that "it would be good" if GMA resigned also agreed that "it would be good" if GMA were ousted by People Power or a military coup. I suspect that a core of common respondents sits in the AGREE and DISAGREE columns of all three questions.
WHEN THE UNDECIDEDS HAVE IT

Hardly emphasized in either the SWS reporting or the subsequent media reporting is the glaringly large percentage of the UNDECIDED, or NEITHER, as I've already pointed out. Whenever I run into a survey whose UNDECIDED percentages are even larger than AGREE or DISAGREE, there are usually two possibilities:
(1) People don't care either way, or,
(2) Public Opinion on the survey subject is in a period of rapid change.
But just look at those double-digit percentages of UNDECIDED in the March 2006 SWS survey. They are huge compared to, say, the margin of error of plus or minus 3% in these 1,200-respondent surveys.
SPEAKING OF MARGIN OF ERROR...

You have just stumbled upon one of my favorite topics in all of public opinion polling mathematics. What most people don't appreciate is that the margin of error in a scientific survey can be used as a kind of BULLSHIT DETECTOR on the validity and significance of any interpretation or editorializing that might be made of the raw statistical data. This topic is so rich with popular misconceptions and vague concepts that a detailed discussion may be useful to Philippine Commentary readers.
First, look at the following "standard disclaimer," which comes from Pulse Asia Inc.'s recent Media Release on Chacha but which also applies rigorously to SWS --
Based on a multistage probability sample of 1,200 representative adults 18 years old and above, Pulse Asia's nationwide survey has a plus-or-minus 3 percent error margin at the 95 percent confidence level. Subnational estimates for each of the geographic areas covered in the survey (i.e., Metro Manila, the rest of Luzon, Visayas and Mindanao) have a plus-or-minus 6 percent error margin, also at 95 percent confidence level.

WHERE DO THESE NUMBERS COME FROM?

Have you ever asked yourself where these numbers come from and how to use them in reading the surveys? I suspect most people do not know the answers, yet they are the key to taking a public opinion pollster's data and making sense of it yourself, without having to rely on the pollster or other media to interpret it for you!
MARGIN OF ERROR DEPENDS ON SQUARE ROOT OF SAMPLE SIZE

The first thing to understand is why the "nationwide survey" has a plus or minus 3% error margin while the "subnational estimates" for Metro Manila, the rest of Luzon, Visayas and Mindanao have a plus or minus 6% error margin. The answer is this. The nationwide figures take into account ALL 1,200 respondents randomly picked from the total population of over 40 million adult registered voters in the Philippines, while each of the "subnational estimates" comes from the responses of a smaller component group of 300 respondents randomly picked from NCR, the rest of Luzon, the Visayas or Mindanao. But why plus or minus 3% and 6% respectively, when the subnational samples are one fourth the size of the total nationwide sample? The reason is mathematical. A simple and famous rule of thumb says the statistical error margin at 95% confidence is approximately equal to 100% divided by the SQUARE ROOT OF THE SAMPLE SIZE. Because the error shrinks only as the square root of the sample size, quartering the sample merely doubles the error margin. (The "plus or minus" itself means nothing mysterious: the true population figure may lie on either side of the measured percentage, since the confidence interval is symmetric about the sample estimate.)
WHERE THE 3% COMES FROM

Thus, for a random sample of 1,200 respondents, the error margin is 100% divided by 34.641, which equals plus or minus 2.89%. This is where the usual plus or minus 3% error margin comes from, after rounding up.
Next, for a random subnational sample of 300 respondents (four of which make up the national sample), the error margin is 100% divided by 17.32, which equals plus or minus 5.77%. This is where the plus or minus 6% for the subnational estimates comes from, after rounding up.
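The arithmetic above is easy to check. A tiny Python sketch of the rule of thumb (my own illustration; it ignores refinements like design effects and finite-population corrections):

```python
from math import sqrt

def margin_of_error(n: int) -> float:
    """Rule-of-thumb 95% margin of error, in percentage points: 100 / sqrt(n)."""
    return 100.0 / sqrt(n)

print(round(margin_of_error(1200), 2))  # 2.89 -> reported as plus or minus 3%
print(round(margin_of_error(300), 2))   # 5.77 -> reported as plus or minus 6%
```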
OF WHAT USE IS THE STATISTICAL ERROR MARGIN?

Ask first a different question: why do SWS and Pulse even bother to ask 1,200 people? Why not ask, say, just 100 people, which would make all that nasty percent arithmetic easy to do in one's head? Well, according to our formula, if SWS conducted a 100-respondent survey, each of the statistics measured would carry a plus or minus ten percent error margin. If they asked a hundred people whether they agreed it would be good for Gloria to resign, and 44% said yes, it would only mean that, 95% of the time such a survey was conducted, the result could fluctuate anywhere between a low of 34% and a high of 54% -- which would make the survey nearly useless, wouldn't it? This is one important use of the error margin: knowing whether the sample size is large enough for the question being asked and the precision needed in the answer.
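The same rule of thumb can be run in reverse to ask how many respondents a desired precision requires. A short sketch (my own illustration, inverting 100/sqrt(n) to n = (100/moe)^2):

```python
from math import ceil

def sample_size_for(moe_pct: float) -> int:
    """Respondents needed for a desired 95% margin of error (rule of thumb)."""
    return ceil((100.0 / moe_pct) ** 2)

print(sample_size_for(10))  # 100   -- the nearly useless survey above
print(sample_size_for(3))   # 1112  -- why pollsters settle near 1,200
print(sample_size_for(1))   # 10000 -- precision gets expensive very fast
```

Note the quadratic cost: halving the error margin requires quadrupling the sample, which is why 1,200 respondents and plus or minus 3% is the industry's practical sweet spot.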
NET SATISFACTION RATING HAS TWICE THE STATISTICAL ERROR MARGIN

But there is another very important application of the statistical error margin that the pollsters have assiduously avoided explaining or stressing to the public. It has to do with what is called the NET SATISFACTION RATING, which is equal to the DIFFERENCE between the percentage who say they are satisfied with the President's performance and the percentage who say they are dissatisfied. What most people do not know is that the NET SATISFACTION RATING carries TWICE the statistical error margin of its component statistics! That is because there is no free lunch in the mathematical statistics that underpins the social science of public polling. Whenever you COMBINE statistics that each carry the standard error margin, you must, as a conservative rule, ADD the individual error margins to get the error margin of the result (for sums and differences the absolute margins add; for products and quotients it is the relative margins that add).
In other words, the NET SATISFACTION RATING numbers bandied about carry a PLUS OR MINUS SIX PERCENT error margin. And if you take the difference between the net ratings from two different surveys, the resulting CHANGE in the net rating carries an error margin of plus or minus TWELVE PERCENT.
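You don't have to take the add-the-margins rule on faith; a quick Monte Carlo sketch (my own illustration, not SWS methodology) simulates many 1,200-respondent surveys of a population that is 29% satisfied and 54% dissatisfied, then compares the spread of a single component percentage with the spread of the net rating:

```python
import random

random.seed(42)                 # fixed seed so the run is reproducible
N, TRIALS = 1200, 2000          # respondents per survey, simulated surveys
P_SAT, P_DIS = 0.29, 0.54       # assumed "true" population proportions

sat_pcts, net_pcts = [], []
for _ in range(TRIALS):
    sat = dis = 0
    for _ in range(N):          # one simulated respondent at a time
        u = random.random()
        if u < P_SAT:
            sat += 1
        elif u < P_SAT + P_DIS:
            dis += 1
    sat_pcts.append(100.0 * sat / N)
    net_pcts.append(100.0 * (sat - dis) / N)

def spread95(values):
    """Half-width of the central 95% of the simulated values, in points."""
    v = sorted(values)
    return (v[int(0.975 * len(v))] - v[int(0.025 * len(v))]) / 2

print(f"component spread:  +/-{spread95(sat_pcts):.1f} points")
print(f"net rating spread: +/-{spread95(net_pcts):.1f} points")
```

The net-rating spread comes out roughly double the component spread, which is the point: a headline net rating is a far blunter instrument than its plus-or-minus-3% components suggest.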
So, did the President's Net Satisfaction Rating change from December 2005 to March 2006? Here is the SWS item from their recent national survey --
The March 2006 Social Weather Survey finds 29% satisfied and 54% dissatisfied with the performance of President Gloria Macapagal-Arroyo, for a Net Satisfaction Rating of -25. Although negative for the seventh consecutive quarter, the President's new rating is a little less bad than in December 2005, when fewer (24%) were satisfied and the same proportion (54%) were dissatisfied, for a Net Satisfaction Rating of -30.

Since the statistical error margin on the CHANGE in the Net Satisfaction Rating is plus or minus 12% (and even a single net rating carries plus or minus 6%), the only scientific conclusion that can be drawn is that the rating did not measurably change during this period: the reported movement of 5 points is smaller in magnitude than either margin.
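The bullshit-detector arithmetic itself fits in a few lines. This sketch (my own, encoding the add-the-margins rule above) checks whether a reported change in the net rating exceeds its propagated error margin:

```python
def significant_change(old_net: float, new_net: float,
                       component_moe: float = 3.0) -> bool:
    """True if the change in net rating exceeds the propagated margin.

    Net rating = satisfied% - dissatisfied%, so its margin is 2x the
    component margin; a change between two surveys doubles it again.
    """
    change_moe = 4 * component_moe  # 2x for the net, 2x for the difference
    return abs(new_net - old_net) > change_moe

print(significant_change(-30, -25))  # a 5-point move is inside +/-12: False
```

By this standard, the December-to-March "improvement" from -30 to -25 is statistical noise, exactly as argued above.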