
How the LGA have shaken up local government public opinion polling

Back in 2010 the Government abolished the Place Survey – a statutory public opinion poll that all English local authorities had to undertake. This freed us to decide on the polling appropriate to our areas and to what we wanted to achieve. At their best these surveys help evaluate performance and give decision-makers the tools and options to deliver real change for local people and their area. They provide context, evidence of behavioural change and demand, and give a voice to those who are not otherwise engaged by public meetings or who do not respond to specific consultations. At their worst they are, in the words of Grant Shapps, the DCLG Minister at the time, who abolished the Place Survey, “a cosmetic exercise which never changes anything.”

The LGA have now published guidance on how local authorities might carry out public opinion polling of their residents (available here). If a local authority follows the guidance, they can upload their findings to the LG Inform online platform and compare them with other local authorities, or with a range of other performance indicators. The challenge now is to ensure that this does not just become an exercise in creating spreadsheets of benchmarks that never change anything, but instead gives local government decision-makers the information they need to understand local circumstances.

How things have been shaken up

In responding to the needs of local authorities, the LGA have also subtly shaken up the world of local government public opinion polling methodology. They have established new, improved standards in the polling we do by challenging some long-held practices. The first challenge is that we’ll have to get used to the concept of an effective base. In Westminster this lowers the base size of our 2008 postal Place Survey from 1,576 to 882. The reason is that the heavier the weights used in a survey to adjust for non-response among key demographics, the lower your effective base. For LG Inform the effective base of all data submitted has to be 500 or more. This is not the number of interviews but the number that should be used to work out statistical reliability. It is rarely used (try Googling for it) but is a good tool for ensuring data quality.

A postal survey can be excused for needing heavy weights because of the voluntary nature of responses and differential response rates. However, when local authorities commission a face-to-face or telephone survey and set quotas to ensure a representative spread of interviews, it is less acceptable to need heavy weights. Often the reason some companies can provide very cheap quotes on surveys is that they rely too heavily on weighting the data. Rather than working extra hard to reach the hardest-to-reach, they fill their quotas with people who are easier to reach and simply say the data will be weighted to the known profile. This was the excuse used in the official Place Survey guidance, but that does not make it right.

In Westminster our latest resident survey of 500 residents had an effective base of 457, good in the London context of a very difficult area to survey. For a local authority in the north of England that we have worked with, a survey of 500 had an effective base of 479. In the big scheme of things this does not really impact on the reliability of the findings, but in future we will, as the LGA suggest, have to aim for say 550 interviews to meet their criteria. For others the challenge will be bigger. I’ve seen surveys I haven’t run of 500 residents that have an effective base of less than 400. As a sector we should be commissioning the effective sample size, and this applies not just to surveys of around 500 but to bigger surveys too. If you commission 1,000 interviews you should get an effective 1,000, not the 700 or 800 some probably currently deliver. It is rare in the market research industry to ask for this, and doing so will place us at the top of our game.
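To show why I say the reliability impact is modest, here is a rough Python sketch using the standard simple-random-sample approximation for a 95 per cent margin of error on a 50 per cent result. It is only an illustration, but it puts the effective bases mentioned above side by side:

```python
import math

def margin_of_error(effective_base, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / effective_base)

# Compare the effective bases discussed above with the 500 target
for n in (500, 479, 457, 400):
    print(f"effective base {n}: +/- {margin_of_error(n):.1f} points")
```

On that approximation the gap between an effective base of 500 and one of 457 is only a couple of tenths of a percentage point on a 50 per cent finding; it is when the effective base drifts towards 400 and below that the loss of precision starts to show.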

What is an effective base?

It is worth bearing in mind that if you weight the data in any way you will have a lower effective base. To calculate the effective base you need a full list of the individual weights applied. Each respondent has their own weight. If it is less than one their answers are weighted down in the data, as they are over-represented; if it is more than one they are weighted up, as they are under-represented. All you need to do is divide A by B to calculate the effective base. A is calculated by adding up all the weights and then squaring the total. B is calculated by squaring each of the weights individually and then adding them up. I am very thankful to Populus for helping me find a way to explain an effective base simply. The onus should be on the supplier to ensure compliance, but local authority researchers now have a clear quality standard to commission to and a means to check it.
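For anyone who wants to check their own supplier’s figures, a minimal sketch of that calculation in Python might look like this (the weights are made up purely for illustration):

```python
def effective_base(weights):
    """Effective base = A / B, where A is the squared sum of the weights
    and B is the sum of the squared weights."""
    a = sum(weights) ** 2                # A: add up all the weights, then square the total
    b = sum(w ** 2 for w in weights)     # B: square each weight individually, then add them up
    return a / b

# Ten illustrative respondents: some weighted down (<1), some weighted up (>1)
weights = [0.6, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.3, 1.5, 1.6]
print(round(effective_base(weights), 1))  # 9.3 - fewer than the 10 interviews achieved
```

In practice you would feed in the full weight column from the survey data file rather than ten made-up values.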

Changing the wording of the questions we ask

The second challenge will be to change the questions we ask residents. If you have not done so already, I strongly recommend the background study, Are you being served? As someone who started their career at Ipsos MORI working on residents’ surveys and the very first BVPI General Surveys of 2000 (hands up who remembers them!), the report brings back some very warm memories of old friends and the old way of doing things. It is a virtually complete history of local government resident surveys and goes into great depth on why we have asked certain questions and used the approaches we have. As now an old man of local government polling, I change my views on correct question wording only very reluctantly. Luckily the recommended questions are sensible and familiar. Some questions I just need to tweak to bring into line, a word here or there; others require more of a leap of faith.

The boldest move is not just to specify that the first three questions are consistent and unmovable, but that they carry substantial preambles. My training, at least in my mind, taught me to loathe preambles. I prefer short, sharp questions that capture top-of-mind views. If someone does not know what their council does, or is thinking of another organisation altogether, so be it: I am measuring the perception of the council, however confused it might be. But the world moves on and I have to move with it. The testing done for the LGA showed that people appreciated the little bit of guidance in the preamble, and I know that many people disagree with me on the use of preambles. Will I include preambles in future surveys? Yes. I just have to learn to love them.

But, I hear you ask, what about the long-term trends? If I bring preambles on board and start shifting questions around, will I not be damaging the comparisons within my authority and the others I work with? My plan is to use split samples for the introduction of the new opening question bank. Half the survey respondents will randomly be asked the new questions, and half the old questions I have asked before in Westminster. I expect the results to be the same, in which case I will simply merge the data from the two versions of the questions and report that internally. But if there are substantial differences, I can use that information to help bridge the old data into the new world of LG Inform. It will mean my first survey will not be compliant, but subsequent surveys will be. For my other client authorities I will either have the luxury of starting completely afresh or will help them make the change if they want to. For many a big-bang approach will be perfectly appropriate. All local authority survey providers should be able to advise on the possible transition.
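For anyone planning the same sort of transition, a minimal sketch of the split-sample check might look like the following. The two-proportion z-test is a standard significance test rather than anything specified in the LGA guidance, and the counts are invented for illustration:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented figures: 180 of 250 satisfied on the old wording, 170 of 250 on the new
z = two_proportion_z(180, 250, 170, 250)
if abs(z) < 1.96:  # no significant difference at the 95 per cent level
    print("Merge the two half-samples and report the combined result")
else:
    print("Report separately and use the gap to bridge the old trend into the new questions")
```

In reality I would run the check question by question, but the principle is the same: merge where the wording makes no measurable difference, and bridge where it does.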

Make the findings count

The third and final challenge is to make the findings count. Benchmarking should provide context, not a comfort blanket to explain away poor performance. Each local authority that commissions public opinion polls should ensure the polling meets the needs of the authority and the people it serves. Up until this point the only additional context for this polling has come from sharing with other councils informally or using the LGinsight/Populus national polling findings (latest findings available here from May 2012). Hopefully this new LGA initiative will encourage us to share more within a robust framework. The LG Inform platform contains a wealth of other data sources which help provide further context to the polling. We have to be honest: did the previous wave of comparative Place Survey data really drive service improvement? As a top-performing authority on the survey, we found that only a few authorities asked us how we did it. Most authorities observed their performance in the comparative data and wanted to be in line with everyone else. They did not feel they understood the reasons driving performance. How many targets for improvement were set with no real plan or understanding of how to meet them?

The real value in the LG Inform platform and the standardised polling will be to open up debate and discussion to understand differences. Through the LGA Knowledge Hub, and in particular the regional LARIA network, I hope local researchers will seek to improve their performance by talking to others. It is important not to keep going to the LGA to sign off or approve our surveys; we need to be confident as local researchers in supporting each other. The DCLG and Audit Commission were in the past driven mad by the incessant questions and queries from local councils over points of methodology. The staff in the LGA team are very helpful, but we should not overburden them. I was personally surprised by some of the questions asked in the consultation. It showed me that some local authority researchers need to improve their skills in managing public opinion research. I would include myself in this as well, and already this process has identified some useful points for me to consider. The LGA have raised the bar and I will have to work harder to meet it. I hope the sector genuinely embraces this challenge. It will mean not only far more robust local government public opinion polling, but improved outcomes for the people and organisations we serve.
