Citizen satisfaction with service delivery
So how satisfied are South Africans with service delivery in this country?
S&T recently completed a citizen satisfaction (CS) survey on behalf of the
Public Service Commission (PSC) to gauge satisfaction levels amongst citizens
who had experienced the services delivered by four national departments, namely
the departments of Education, Health, Housing and Social Development. The overview
report and the separate reports for each department can be downloaded from the
PSC website: www.psc.gov.za
The CS instrument used in the study assessed the gap between the service delivery a citizen expects and the service the citizen actually encountered. With this information, departments will be better placed to meet the expectations of the recipients of their services.
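The measurement the instrument performs can be expressed very simply: for each feature of a service, subtract the rating citizens gave their actual experience from the rating they gave their expectation. A minimal sketch, in which the feature names and 5-point ratings are invented for illustration and are not taken from the PSC survey:

```python
# Illustrative sketch only: the feature names and 5-point ratings below
# are invented, not drawn from the PSC survey data.
expected = {"waiting time": 4.6, "staff courtesy": 4.4, "information provided": 4.2}
experienced = {"waiting time": 3.1, "staff courtesy": 4.0, "information provided": 3.5}

# Gap = expectation minus experience; a positive gap means the service
# fell short of what citizens expected.
gaps = {feature: expected[feature] - experienced[feature] for feature in expected}

for feature, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{feature}: gap of {gap:.1f}")
```

Sorting by gap size gives a first, unweighted view of where delivery falls furthest short of expectation.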
Identifying the gap between expectation and experience raises an obvious question: what does one do once the gap has been identified? In the remainder of this article we explain how this information can be used in shaping and implementing a service delivery improvement plan (SDIP).
Service delivery improvement plan
The diagram above illustrates the four components crucial to ensuring that the services delivered by any government department are of sufficient quality to close the gap between the expectations of citizens and actual performance.
Firstly, it is important to understand citizens' expectations of government service delivery - measuring this informs each department about the perceptions citizens have of service delivery. Secondly, it is important to evaluate the actual service delivered by departments. Thirdly, departments can draw on various tools in what we have called a Government Service Improvement Toolbox. The toolbox contains a wide variety of measures, which departments can implement, or have already implemented, to improve service delivery. Fourthly, departments must engage in ongoing performance measurement of staff and business units against agreed upon service delivery standards. The following discussion addresses these four components and suggests ways in which the information can be applied in a practical manner to assist departments in their mission to improve the services delivered to citizens in this country.
It is worth reiterating that the Batho Pele White Paper (1459 of 1997) outlined a clearly defined implementation strategy for transforming public service delivery. It mapped out the process that should be undertaken to improve service delivery with the emphasis placed on consulting citizens. The implementation of an SDIP consists of eight key steps, four of which were used in the development of the CS survey referred to above, namely:
- Identify the recipients of the service,
- Establish the needs and priorities of citizens who receive the service,
- Establish the current service baseline, and
- Identify the gap between current levels of service and the service levels citizens would like the public sector to strive for.
The remaining four steps of the SDIP, which are elaborated below, are:
- Set service standards,
- Gear up for delivery,
- Communicate the service standards, and
- Monitor delivery against the established standards and publish the results of this performance measurement.
Setting service standards
Service standards are fundamentally different from citizens' expectations. Service standards are indicators of the best level of service delivery a department can realistically provide given the resources available. Good service standards are meaningful to citizens and are developed with citizen expectations and input in mind. However, citizen expectations may not be realistic, nor may they be at appropriate levels (as was noted earlier when examining expectations in the results section of this report). This makes it difficult to transform levels of expectation into standards, given the resources and the mandate of a particular department. The key is to use levels of expectation as a guide for setting standards, but it is ultimately up to a department to set the standards it wants to strive for.
It is possible for each department to produce data that demonstrate that it is meeting its service standards at a very high level (e.g. number of citizens receiving housing subsidies), and yet citizens may remain dissatisfied. By surveying citizens about their expectations and satisfaction levels each department has valuable information to use to bring the expectations of the citizens and the respective departments' own service delivery experiences closer in line. However, it is important to note that features of service delivery which receive the lowest ratings (i.e. where the gap between actual service delivery and expected service delivery was large) will not necessarily be the citizens' priorities for improvement. Having citizens indicate the importance of each of the features will help each department prioritise and thus be better placed to allocate resources to the improvements that will have the greatest impact.
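One way to operationalise this prioritisation is to weight each feature's gap by the importance citizens attach to it, so that a large gap on a minor feature does not crowd out a smaller gap on a critical one. A sketch with invented gap scores and importance ratings (not survey data):

```python
# Illustrative sketch: the gap scores and importance weights are invented.
# priority = gap x importance, so resources go to the improvements that
# matter most to citizens, not merely to the widest gaps.
features = {
    # feature: (gap between expected and actual, importance rating 1-5)
    "subsidy processing time": (1.8, 2.0),
    "clarity of application forms": (0.9, 4.5),
    "office opening hours": (1.2, 4.0),
}

priorities = {name: gap * importance for name, (gap, importance) in features.items()}

for name, score in sorted(priorities.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: priority score {score:.2f}")
```

Note that the feature with the widest gap (subsidy processing time, 1.8) ends up lowest in priority because citizens rated it least important; this is exactly the re-ordering the paragraph above describes.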
Setting service standards based on benchmarks derived from both citizens' input and other quality assurance methods is an important output of a CS survey. The key steps in this process of establishing the benchmark and then measuring service delivery against it are shown below:
Key steps in benchmarking
- Survey service delivery to obtain a baseline score and to establish which features of the service are most important to those receiving it (the latter helps a department to prioritise issues).
- Ask whether these present levels of service delivery are acceptable to the department in terms of its mission, Service Delivery Charter, Batho Pele, etc.
- If they are acceptable, set that level as the benchmark score and then ask: how do we maintain the current service delivery?
- If they are not, ask what an acceptable score would be and set it as the benchmark score; then ask: what needs to be done to achieve acceptable service delivery? What would it take to fully satisfy citizens making use of this service?
- Measure on a continuous/regular basis to assess whether service delivery is close to or meets the benchmark.
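The decision at the heart of these steps can be sketched as a simple comparison of the baseline score against an acceptable score; the function name, scale, and values below are invented for illustration:

```python
def benchmark_action(baseline_score: float, acceptable_score: float) -> str:
    """Return the next question a department should ask, following the
    benchmarking steps above. Scores are on an illustrative 0-100 scale."""
    if baseline_score >= acceptable_score:
        # Present delivery is acceptable: lock it in as the benchmark.
        return "Benchmark set at current level; ask how to maintain delivery."
    # Present delivery falls short: the acceptable score becomes the benchmark.
    return "Benchmark set at acceptable level; ask what must change to reach it."

print(benchmark_action(baseline_score=62.0, acceptable_score=75.0))
print(benchmark_action(baseline_score=80.0, acceptable_score=75.0))
```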
Once benchmarks have been set, a department is in a position to compare:
- Service delivery between different delivery points in the same business unit,
- Different business units within the department, and
- Differences in performance between itself and other departments in the public sector.
However, the primary purpose of the baseline scores is to give managers a benchmark against which future performance can be measured in quantifiable form. The value of this benchmark data is that it allows managers to build this information into their performance management systems. It also allows managers to assess where their strengths and weaknesses are in terms of service delivery. By receiving a report card on their service delivery, managers will be able to prioritise their work, target areas of performance that are unsatisfactory, and determine the importance and quality of the service as perceived by citizens.
Gearing up for delivery
Probably the single most important aspect of gearing up for improved service delivery is the guidance, leadership and advice provided by managers to their business units. Two other important aspects are the structure of the organisation and the workforce who deliver the service. All three are discussed below.
Effective management of service delivery demands sound decision-making. Decision-making is only as accurate and reliable as the information on which it is based. The importance of management in the quality improvement plan is twofold. Firstly, the adaptation of any organisation to change - particularly a department that is engaged with transforming itself into a learning organisation focussed on service delivery - requires managers who are skilled and committed to the vision of the department.
Managers design quality improvement initiatives and also have to ensure that their staff implement the initiatives. It is vital that managers within each department are supportive of the quality improvement initiatives and promote an organisational culture that supports transformation. However, of equal importance is the need to accept that not all areas of performance can be improved simultaneously, and that management must prioritise areas for improvement. A department must remain focussed on specific areas rather than trying to "do everything" too quickly.
Secondly, management can drive quality improvement by way of a systems approach. Perhaps the most innovative approach to quality improvement has been the increased focus on system-related problems rather than on isolated problems. Examples of the system-related approach (a crucial tool in the Government Service Improvement Toolbox) include Continuous Quality Improvement (CQI) and Total Quality Management (TQM), which have illustrated the importance of examining all the steps involved in the delivery of a service, thereby systematically identifying problems in the process and enabling management to identify opportunities for restructuring the process.
Further value in the system-related approach can be found in the fact that this approach emphasises the use of teams to drive quality improvement. The effective functioning of the team plays a crucial role in quality improvement, and it is important that management nurtures the team.
Not all change is improvement, but all improvement requires change
(Department of Health, 2000: 54)
Gearing up departments for service delivery began several years ago (all government departments have already implemented service delivery improvement programmes), but the data gathered by S&T using the CS survey suggest there are areas of vital importance still needing to be addressed. Departments must ensure that the CS survey not only helps with acquiring new knowledge about the impact of services delivered, but that it also guides each department in finding solutions in problem areas.
It is therefore important for senior managers to ensure that appropriate strategies are in place to manage change in the organisation. This is in line with literature on improving the delivery of services, which notes that initiatives are far more likely to succeed when the processes or systems are improved, rather than "blaming" individuals who underachieve. Organisations that promote teamwork, partnerships and joint responsibility find it far easier to improve service delivery than those that focus on the individual. A recent National Department of Health report (2000) noted that:
“The experience of other industries that have tried to address quality problems is that defects in quality are rarely attributable to a lack of will, skill, or malign intention among the people involved in the production process. In most cases, problems are built directly into a complex production process. Even when people are at the root of defects, the problem is generally not one of motivation or effort, but rather of poor job design, failure of leadership, or unclear purpose” (54).
A study performed elsewhere in the public service sector noted that poorly structured organisations with complex internal processes can severely hamper service delivery. Typically these organisations are characterised by high wastage and large numbers of error-ridden processes. Where organisations have been able to successfully restructure and modify processes, wastage has decreased and errors have been reduced, simultaneously leading to improved service delivery.
It is generally accepted that for every rand not spent in preventing typical service delivery errors or problems, organisations can expect to spend at least ten times more - and as much as 100 times more - fixing things after an error occurs.
For example, readers will be aware that once a problem has occurred and a citizen takes the initiative to write a letter to their MEC or senior manager in a national department, and/or involves the media, the costs of addressing and responding to the problem can be alarming and escalate dramatically.
- Prevention costs: gathering and using citizen feedback
- Inspection and correction costs: fixing errors BEFORE they reach citizens
- Field problem costs: fixing errors AFTER they reach citizens
These quality-related costs account for 30-50% of costs in a typical service delivery organisation.
Figure 1: The costs associated with quality service delivery
Figure 1 illustrates the costs associated with service delivery, and demonstrates how these costs increase exponentially the longer it takes to deal with the problem. The first two costs are unavoidable as they are incurred when the service is produced (i.e. production costs and the systems put in place to ensure effective and efficient delivery). If the systems in place (such as appropriate skills development, a CS survey and a functioning complaints process) are working to their full potential then any problems encountered will be dealt with promptly.
However, if the systems are not in place then other costs are incurred. The first are what we have referred to as inspection and correction costs. These are typically internal measures, which "inspect" service delivery to ensure that any problems that arise can be dealt with before they are actually delivered (e.g. reprinting a manual that is incorrect, but before it has been distributed). The second, which are typically very expensive to rectify, are those costs that a department will face if it has to rectify the service delivery after an unsuccessful attempt has been made to deliver it (e.g. recalling an incorrect manual or guideline after it has been delivered to all the staff of a department).
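The escalation described above, where a rand spent on prevention saves ten on correction and as much as a hundred on field problems, can be made concrete with a small sketch; the rand amounts are invented, not actual departmental costs:

```python
# Illustrative of the 1 : 10 : 100 rule of thumb in the text;
# the rand amounts are invented, not actual departmental figures.
prevention_cost = 1_000                    # e.g. gathering citizen feedback up front
correction_cost = prevention_cost * 10     # fixing an error before it reaches citizens
field_cost = prevention_cost * 100         # fixing it after citizens are affected

for label, cost in [("Prevention", prevention_cost),
                    ("Inspection & correction", correction_cost),
                    ("Field problem", field_cost)]:
    print(f"{label}: R{cost:,}")
```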
The most effective strategies to improve service delivery are those that involve staff in both the planning and implementation of service delivery improvement strategies. Any strategy to improve service delivery within a department will need to engage with employees of the department, by:
- Revising the training of staff to incorporate service delivery improvement initiatives,
- Understanding and being sensitive to the impact on staff of necessary changes and/or restructuring of the organisation, and
- Incorporating service delivery standards into individual performance agreements and/or service level agreements.
Communicating the service standards
A crucial component of creating an environment within which quality service delivery will flourish is to strengthen those who benefit from the delivery of services by a department. However, to communicate the information, both internally and externally, gathered by the CS survey requires a robust communication strategy. It is crucial that the communication strategy provides an appropriate feedback loop that will ensure that information is fed both upwards within a department to allow managers to make informed decisions, and downwards to keep citizens informed about the initiatives a department is embarking upon as part of its SDIP. Components of a communication strategy would include:
- Citizen education - i.e. make sure that staff educate citizens about what services a department offers, thus ensuring that citizens use the services in a manner that the departments have deemed appropriate.
- Provide citizens with reliable and up-to-date information on quality that meets their needs.
- Provide citizens with information pertaining to the services a department has targeted for improvement.
- Create effective information dissemination mechanisms that ensure citizens receive important information about each department.
- Provide assistance to those citizens who require help in making informed decisions about the type of service they would like to access.
- Identify opportunities for citizens to be involved in the governance and oversight of services delivered by each department.
- Inform citizens whether the service charter of a department will be updated based on the results of the survey.
- Commission research on promoting effective use of information by citizens.
Monitoring delivery against the standards
Monitoring delivery should be a core component of every department's performance monitoring system. This is done by repeating the survey cycle once the necessary changes have been implemented, continuing to do so on a regular basis, and conducting smaller diagnostic studies as required. By comparing the original or baseline measures with newer information, a department can track the success or failure of its service delivery improvement efforts.
Citizen ratings of overall satisfaction with performance can be measured through a series of surveys to evaluate the success of the SDIP process over time. As Figure 2 illustrates, the gap between the performance rating and the expectations of citizens should diminish as the department identifies and resolves service delivery gaps, allowing it to adjust continually to the changing needs of its citizens.
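Repeating the survey lets a department check whether the gap is in fact closing. A minimal sketch with invented average ratings for three hypothetical survey cycles:

```python
# Invented data: average expectation and performance ratings (5-point scale)
# over three hypothetical survey cycles; not actual survey results.
cycles = [
    {"cycle": 1, "expectation": 4.5, "performance": 3.2},
    {"cycle": 2, "expectation": 4.5, "performance": 3.7},
    {"cycle": 3, "expectation": 4.4, "performance": 4.1},
]

previous_gap = None
for c in cycles:
    gap = c["expectation"] - c["performance"]  # expectation minus performance
    trend = "" if previous_gap is None else (
        " (narrowing)" if gap < previous_gap else " (widening)")
    print(f"Cycle {c['cycle']}: gap {gap:.1f}{trend}")
    previous_gap = gap
```

A steadily shrinking gap across cycles is the quantitative signature of a working SDIP; a widening gap flags where the plan needs adjustment.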
This article has demonstrated how information gathered by a reliable and valid CS survey can be used by a department to shape and implement an SDIP. By carrying out these steps, departments will ensure that a CS survey plays an important role in fostering a service delivery culture throughout the public service.
References
Department of Health. September 2000. A Policy on Quality in Health Care for South Africa (2nd draft). Pretoria: National Department of Health.
Dutka, A. 1994. AMA Handbook for Customer Satisfaction. Illinois: NTC Publishing Group.
Health Services Research Group. 1992. A guide to direct measures of patient satisfaction in clinical practice. Canadian Medical Association Journal, 146(10), 1727-1731.
Public Service Commission. 2000. Survey of Compliance with the Batho Pele Policy.
Public Service Commission. February 2001. Monitoring and Evaluation Scoping Project.
White Paper on Transforming Public Service Delivery, No. 1459 of 1997.