Thursday, November 23, 2006

Pulse Asia Survey: No News Is No News

OR, Why the Survey Firms Invented the Net Satisfaction Rating

PULSE ASIA reports on a year's worth of polling on the Performance and Trust ratings of the President from October 2005 to November 2006 (Ulat ng Bayan surveys)...
...President Arroyo’s overall performance ratings are not significantly different from those recorded in July 2006 ...

...There is hardly any change – positive or negative – in the President’s overall trust ratings between July and November 2006...
I must say, they try harder nowadays at the Avis of Public Opinion Pollsters. The statistical parameters of the survey are clearly laid out in the beginning of the Press Release, namely the number of respondents (1200) and the built-in statistical margins of error for national results (+/-3%) and sub-national results (NCR, Luzviminda +/-6%). Even the Confidence Level (95%) is mentioned, but let's ignore what that is for now. The beginning of the Pulse Asia Press Release reads like the label on a Medicine Bottle, which is the way it should be.

A point worth clarifying for Philippine Commentary readers is this: the Statistical Margin of Error is present even if the survey is conducted perfectly from a clerical and mechanical point of view. In other words, the Statistical Margin of Error is in addition to any "mistakes" that the survey personnel and data collectors might actually make in polling, counting, recording or otherwise processing the statistical raw data. The reason of course is that the Statistical Margin of Error is not due to "mistakes" or "sloppiness" but merely reflects the innate imprecision created by a RANDOM SAMPLE based survey.
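For the curious, the quoted margins of error can be reproduced from the standard formula for a 95% confidence interval on a proportion. This is a sketch, not Pulse Asia's published methodology: the worst-case proportion of 0.5 and the z-value of 1.96 are the conventional assumptions behind such figures.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"National    (n=1200): +/- {margin_of_error(1200):.1%}")  # about +/- 3%
print(f"Subnational (n=300):  +/- {margin_of_error(300):.1%}")   # about +/- 6%
```

Note that halving the margin of error requires quadrupling the sample size, which is why national surveys settle for +/-3% rather than chase +/-1%.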
Armed with this information, any intelligent layman can completely understand the two Tables containing, in a neat and compact format, the data from a whole year's worth of Pulse Asia's polling on the "Performance" (Table 1) and "Trust" (Table 2) Ratings of the President.

I must congratulate Pulse Asia for including a column in their Tables showing the CHANGE in the survey statistic from the last survey period. This helps the intelligent survey peruser decide what level of significance or importance to put on a result. If the change is less than or about equal to the relevant Margin of Error, then the apparent change may only be due to the expected fluctuations induced by the finite Random Sample Size. This format makes it easy to see that there is remarkable stability in the Performance and Trust Ratings as measured by the pollster.

I have very little to gripe about this time in the way Pulse Asia reported its survey quarter. We must however bear in mind that the actual distribution of public opinion on the two Questions posed (whether respondents approved of the President's performance, and how much trust they put in her) is NEVER going to be measured directly and objectively in some independent process like an election or plebiscite. Therefore the results of these polls will never actually be put to some kind of verification test, unlike, for example, exit polls or voter preference polls during the campaign period.

Why the Net Satisfaction Rating Was Invented

It may also be useful for Philippine Commentary readers to see how "Performance Ratings" such as the above are related to something called the Net Satisfaction Rating or Net Approval Rating, a calculated statistic that is equal to the DIFFERENCE between the percentage that approves and the percentage that disapproves of the President's performance. I have calculated what the Net Satisfaction Rating of the President would be based on the data in Table 1 of the Pulse Asia Survey. What happens is that a new statistic is created that looks like things are CHANGING a lot from one Quarter to the next, but it's all an illusion because it's the same data.
Pulse Asia Ulat ng Bayan Survey, Time Series Data (Oct 2005, Mar 2006, Jul 2006, Nov 2006)

[Table: Approve minus Disapprove — the "Net Approval Rating," also called the "Net Satisfaction Rating" or "Net Performance Rating" — computed for each survey period; the numerical values are not reproduced here.]
Now do you see why the survey outfits invented the Net Satisfaction Rating? It's because NO NEWS doesn't sell, whereas the NSR has more built-in variation, because it contains the variations of the two quantities being subtracted from one another. It's hard to turn the status quo into a headline. But since the pollsters don't usually tell you that the Margin of Error in the NSR is actually twice the normal margin of error, and the Media have refused to understand it, we get sensational reports of "plunging" ratings, or "suddenly soaring" ratings.
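To make the arithmetic concrete, here is a sketch of how the Net Satisfaction Rating and its roughly doubled margin of error come about. The approve/disapprove figures below are hypothetical, not Pulse Asia's actual numbers; the doubling reflects the worst case, where the errors in the two percentages being subtracted push in opposite directions and therefore add.

```python
def net_rating(approve, disapprove):
    """Net Satisfaction Rating: approve minus disapprove, in percentage points."""
    return approve - disapprove

# Hypothetical quarter-to-quarter figures (illustrative only).
q1_approve, q1_disapprove = 42, 38
q2_approve, q2_disapprove = 40, 41

moe = 3            # +/- margin on each individual percentage (national, n=1200)
net_moe = 2 * moe  # worst case for the difference: the two errors add

q1_net = net_rating(q1_approve, q1_disapprove)  # +4
q2_net = net_rating(q2_approve, q2_disapprove)  # -1
change = q2_net - q1_net                        # -5 points: headline material!

print(f"Q1 net: {q1_net:+d}, Q2 net: {q2_net:+d}, "
      f"change: {change:+d} (but the MOE on the net is +/- {net_moe})")
```

A five-point "plunge" sounds dramatic, yet it sits comfortably inside the +/-6 margin of the net statistic, which is exactly the illusion described above.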

I'd bet that if you check the newspaper headlines from around last July, some of them said something like: President's Net Performance Rating Jumps by 6% from March.

The Net Satisfaction Rating is a product of the public opinion pollsters' Media Bureau, designed for the Art of Making Up Headlines even when No News is No News!

But to its credit, Pulse Asia chose not to stoop to that cheap arithmetic trick called the Net Satisfaction Rating this time and stuck to its raw data, which is illuminating in its own right, as all scientific data should be. It is a cardinal rule of Science to try as much as possible to let the data speak for itself.


manuelbuencamino said...


I looked at the tables and I was wondering if I am reading them properly.

For example, in Table 1: in NCR the approval rating went down from 25 to 19, the disapproval rating went up from 50 to 56, and the undecided remained within the statistical margin of error. So can I conclude that GMA's lost support was the opposition's gain?

In Mindanao, approval went from 20 to 26, disapproval remained the same, but undecideds went from 31 to 25. Does that mean GMA won over the undecided but didn't make any dent in the opposition?

Mlq and I believe the fulcrum is the undecideds. The undecideds show very interesting movements over the entire period covered by the survey in Table 1 but not as much in Table 2.

Do you spot any trends? Can you correlate these numbers to specific news events or certain seasonal occurrences?

If you were a political strategist, how would these figures guide you?

How about another post giving us your analysis?

manuelbuencamino said...


Can one correlate Tables 1 and 2 to Table 6? If so, how?

Rizalist said...

You are quite right about the undecideds. They end up deciding sooner or later, and that is what causes the picture to change when they do. The NSR is a statistic for getting rid of them for now! It is a way of taking them out of the equation for the purposes of headline making.

One thing to keep in mind when looking at the subnational data (NCR or Luzviminda) is that the statistical error is plus or minus 6%, or twice the normal, because each subnational region only has 300 respondents and therefore a bigger MOE.

Regarding changes in the numbers and which categories they come from, it is not possible to say. Remember each survey uses a DIFFERENT random sample of respondents from one quarter to the other.

So even if it looks like the undecideds "went over to the approve column" when approve went from 20 to 26 and undecided went down from 31 to 25, there is no way to actually know how it happened. It is possible that 12% undecided actually became approve while 6% approve became undecided, producing the change we saw. But it is equally possible that 3% undecided became approve, 3% undecided became disapprove, and 3% disapprove became approve, giving the same net result!
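The point above can be checked with a toy flow table. The flows are hypothetical (the survey records no such transitions, since each quarter draws a fresh random sample), and the starting disapprove figure of 49 is assumed only to make the percentages sum to 100. Both decompositions reproduce the same observed marginals: approve +6, undecided -6, disapprove unchanged.

```python
def apply_flows(start, flows):
    """Apply (from_group, to_group, points) transitions to starting percentages."""
    end = dict(start)
    for src, dst, pts in flows:
        end[src] -= pts
        end[dst] += pts
    return end

start = {"approve": 20, "disapprove": 49, "undecided": 31}

# Decomposition A: 12 points undecided -> approve, 6 points approve -> undecided.
a = apply_flows(start, [("undecided", "approve", 12),
                        ("approve", "undecided", 6)])

# Decomposition B: three different 3-point flows telling a different story.
b = apply_flows(start, [("undecided", "approve", 3),
                        ("undecided", "disapprove", 3),
                        ("disapprove", "approve", 3)])

print(a)  # {'approve': 26, 'disapprove': 49, 'undecided': 25}
print(b)  # identical marginals, entirely different underlying movement
```

The marginals are identical, so nothing in the published tables can distinguish the two stories.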

But the most amazing thing really is that there was no significant change in the relative percentages throughout the whole year.

ANALYSIS: Such stasis is normal far away from elections.

Dave Llorito said...

Dr. Anna Tabunda is thoroughly professional; that is why Pulse Asia doesn't seem to extrapolate analysis beyond what the data says.

One major problem with surveys in the Philippines is the large percentage of undecideds. That is also true of surveys by Social Weather Stations. I heard Mahar Mangahas say one time that this large percentage of undecideds needs to be interpreted, because it means something that he has not figured out. Maybe some rocket scientists out there could illuminate us one day, he said. But right now, it remains a puzzle.

Rizalist said...

The large percentage of undecideds usually occurs in "nonscientific questions" that won't ever be put to the test of an election or plebiscite.

But the large percentage of undecideds is not always a sign of a badly designed or ambiguously posed question. It is also possible that people don't CARE to make a decision about the issue implied in the question.

For example, if asked how much we trust the President, many people have honestly never thought about it. And if they start thinking about it only during the survey questionnaire, they may decide they are undecided for no other reason than that they are!

It is a perfectly valid finding. Indeed the outstanding fact is the relative stability of the percentage claiming to be undecided.

But look at "voter preference surveys" such as the ones during the runup to an election. The % of undecided is usually in single digits and shrinks even further as the election nears.