
September 2003

Citizen satisfaction with service delivery

So how satisfied are South Africans with service delivery in this country? S&T recently completed a citizen satisfaction (CS) survey on behalf of the Public Service Commission (PSC) to gauge satisfaction levels amongst citizens who had experienced the services delivered by four national departments, namely the departments of Education, Health, Housing and Social Development. The overview report and the separate reports for each department can be downloaded from the PSC website: www.psc.gov.za


The CS instrument used in the study assessed the variance between what a citizen expects service delivery to be and what the citizen actually encounters. With this information departments will be better placed to meet the expectations of the recipients of their services.
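The gap analysis described above can be sketched as a simple calculation: for each feature of a service, subtract the average experience rating from the average expectation rating. The feature names and scores below are hypothetical illustrations, not data from the survey.

```python
# Sketch of an expectation-versus-experience gap calculation.
# All feature names and ratings are invented for illustration.

def gap_scores(expected, experienced):
    """Return {feature: expectation - experience} for features rated in both."""
    return {f: expected[f] - experienced[f] for f in expected if f in experienced}

expected = {"waiting time": 4.6, "staff courtesy": 4.2, "information": 4.4}
experienced = {"waiting time": 2.9, "staff courtesy": 3.8, "information": 3.1}

# Larger gaps indicate features where experience falls furthest short
# of what citizens expected.
for feature, gap in sorted(gap_scores(expected, experienced).items(),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: gap = {gap:.1f}")
```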

An integral part of identifying the gap between expectation and perception is deciding what to do once the gap has been identified. In the remainder of this article we explain how this information can be used in shaping and implementing a service delivery improvement plan (SDIP).

Service delivery improvement plan

The diagram above illustrates the four crucial components of ensuring that the services delivered by any government department are of sufficient quality to close the gap between the expectations of citizens and actual performance.

Firstly, it is important to understand citizens' expectations of government service delivery - measuring this informs each department about the perceptions citizens have of service delivery. Secondly, it is important to evaluate the actual service delivered by departments. Thirdly, departments can draw on various tools in what we have called a Government Service Improvement Toolbox. The toolbox contains a wide variety of measures, which departments can implement, or have already implemented, to improve service delivery. Fourthly, departments must engage in ongoing performance measurement of staff and business units against agreed upon service delivery standards. The following discussion addresses these four components and suggests ways in which the information can be applied in a practical manner to assist departments in their mission to improve the services delivered to citizens in this country.

Batho Pele

It is worth reiterating that the Batho Pele White Paper (1459 of 1997) outlined a clearly defined implementation strategy for transforming public service delivery. It mapped out the process that should be undertaken to improve service delivery with the emphasis placed on consulting citizens. The implementation of an SDIP consists of eight key steps, four of which were used in the development of the CS survey referred to above, namely:

  • Identify the recipients of the service,
  • Establish the needs and priorities of citizens who receive the service,
  • Establish the current service baseline, and
  • Identify the gap between current levels of service and the service levels citizens would like the public sector to strive for.

The remaining four steps of the SDIP, which are elaborated below, are:

  • Set service standards,
  • Gear up for delivery,
  • Communicate the service standards, and
  • Monitor delivery against the established standards and publish the results of this performance measurement.

Setting service standards

Service standards are fundamentally different from citizens' expectations. Service standards are indicators of the best level of service delivery a department can realistically provide given the resources available. Good service standards are meaningful to citizens and are developed with citizen expectations and input in mind. However, citizen expectations are not always realistic, nor always pitched at appropriate levels (as was noted earlier when examining expectations in the results section of this report). This makes it difficult to translate levels of expectation directly into standards, given the resources and the mandate of a particular department. The key is to use levels of expectation as a guide for setting standards, but it is ultimately up to a department to set the standards it wants to strive for.

It is possible for each department to produce data demonstrating that it is meeting its service standards at a very high level (e.g. the number of citizens receiving housing subsidies), and yet citizens may remain dissatisfied. By surveying citizens about their expectations and satisfaction levels, each department gains valuable information it can use to bring citizens' expectations and its own service delivery closer into line. However, it is important to note that the features of service delivery which receive the lowest ratings (i.e. where the gap between actual and expected service delivery was large) will not necessarily be the citizens' priorities for improvement. Having citizens indicate the importance of each feature will help each department prioritise, and thus be better placed to allocate resources to the improvements that will have the greatest impact.
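The prioritisation point above can be sketched numerically: weight each satisfaction gap by the importance citizens attach to that feature, so that a large gap on a feature nobody cares about ranks below a smaller gap on a feature citizens value highly. The features, gaps and importance weights below are invented for the sketch, not survey results.

```python
# Hypothetical illustration of importance-weighted prioritisation.
# Numbers are invented, not taken from the CS survey.

features = [
    # (feature, gap = expected - experienced, importance on a 1-5 scale)
    ("waiting time", 1.7, 4.8),
    ("signage", 2.0, 2.1),      # largest gap, but low importance
    ("staff courtesy", 0.4, 4.5),
]

# Rank by gap x importance: the biggest raw gap is not necessarily
# the top priority for resource allocation.
priorities = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, gap, importance in priorities:
    print(f"{name}: weighted score = {gap * importance:.2f}")
```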
Setting service standards based on benchmarks derived from both citizens' input and other quality assurance methods is an important output of a CS survey. The key steps in this process of establishing the benchmark and then measuring service delivery against it are shown below:

Key Steps in Benchmarking

1. Survey service delivery to obtain a baseline score and to understand which features of the service are most important to those receiving it (the latter helps a department to prioritise issues).
2. Ask: are present levels of service delivery acceptable to the department in terms of its mission, Service Delivery Charter, Batho Pele, etc.?
   • If yes: set that level as the benchmark score. Then ask: how do we maintain the current service delivery?
   • If no: ask what an acceptable score would be, and set it as the benchmark score. Then ask: what needs to be done to achieve acceptable service delivery? What would it take to fully satisfy citizens making use of this service?
3. Measure on a continuous/regular basis to assess whether service delivery is close to, or meets, the benchmark.
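The decision logic of these benchmarking steps can be sketched as follows; the baseline score and the acceptability threshold are hypothetical values, since the newsletter does not prescribe a scale.

```python
# Sketch of the benchmark-setting decision: an acceptable baseline
# becomes the benchmark (the task is then maintenance); an unacceptable
# one is replaced by a target score. The threshold is hypothetical.

def set_benchmark(baseline_score, acceptable_score=75.0):
    """Return (benchmark, follow-up question) for a baseline score out of 100."""
    if baseline_score >= acceptable_score:
        return baseline_score, "How do we maintain the current service delivery?"
    return acceptable_score, "What needs to be done to achieve acceptable service delivery?"

benchmark, question = set_benchmark(62.0)
print(f"Benchmark: {benchmark} - {question}")
```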

Once benchmarks have been set a department is now in a position to compare:

  • Service delivery between different delivery points in the same business unit,
  • Different business units within the department, and
  • Differences in performance between itself and other departments in the public sector.

However, the primary purpose of the baseline scores is to give managers a benchmark against which future performance can be measured in quantifiable form. The value of this benchmark data is that it allows managers to build the information into their performance management systems. It also allows managers to assess where their strengths and weaknesses lie in terms of service delivery. By receiving a report card on their service delivery, managers will be able to prioritise their work, target areas of performance that are unsatisfactory, and determine the importance and quality of the service as perceived by citizens.

Gearing up for delivery

Probably the single most important aspect of gearing up for improved service delivery is the guidance, leadership and advice provided by managers to their business units. Two other important aspects are the structure of the organisation and the workforce who deliver the service. All three are discussed below.


Effective management of service delivery demands sound decision-making. Decision-making is only as accurate and reliable as the information on which it is based. The importance of management in the quality improvement plan is twofold. Firstly, the adaptation of any organisation to change - particularly a department that is engaged with transforming itself into a learning organisation focussed on service delivery - requires managers who are skilled and committed to the vision of the department.
Managers design quality improvement initiatives and also have to ensure that their staff implement them. It is vital that managers within each department are supportive of the quality improvement initiatives and promote an organisational culture that supports transformation. However, of equal importance is the need to accept that not all areas of performance can be improved simultaneously, and that management must prioritise areas for improvement. A department must remain focussed on specific areas rather than trying to "do everything" too quickly.

Secondly, management can drive quality improvement by way of a systems approach. Perhaps the most innovative approach to quality improvement has been the increased focus on system-related problems rather than on isolated problems. Examples of the system-related approach (a crucial tool in the Government Service Improvement Toolbox) include Continuous Quality Improvement (CQI) and Total Quality Management (TQM), which have illustrated the importance of examining all the steps involved in the delivery of a service, thereby systematically identifying problems in the process and enabling management to identify opportunities for restructuring the process.

Further value in the system-related approach can be found in the fact that this approach emphasises the use of teams to drive quality improvement. The effective functioning of the team plays a crucial role in quality improvement, and it is important that management nurtures the team.

Not all change is improvement, but all improvement requires change
(Department of Health, 2000: 54)


Gearing up departments for service delivery began several years ago (all government departments have already implemented service delivery improvement programmes), but the data gathered by S&T using the CS survey suggest there are areas of vital importance still needing to be addressed. Departments must ensure that the CS survey not only helps with acquiring new knowledge about the impact of services delivered, but also guides each department in finding solutions in problem areas.
It is therefore important for senior managers to ensure that appropriate strategies are in place to manage change in the organisation. This is in line with literature on improving the delivery of services, which notes that initiatives are far more likely to succeed when the processes or systems are improved, rather than "blaming" individuals who underachieve. Organisations that promote team work, partnerships and joint responsibility find it far easier to improve service delivery than those that focus on the individual. A recent National Department of Health Report (2000) noted that:

“The experience of other industries that have tried to address quality problems is that defects in quality are rarely attributable to a lack of will, skill, or malign intention among the people involved in the production process. In most cases, problems are built directly into a complex production process. Even when people are at the root of defects, the problem is generally not one of motivation or effort, but rather of poor job design, failure of leadership, or unclear purpose” (54).

A study performed elsewhere in the public service sector noted that poorly structured organisations with complex internal processes can severely hamper service delivery. Typically these organisations are characterised by high wastage and large numbers of error-ridden processes. Where organisations have been able to successfully restructure and modify processes, wastage has decreased and errors have been reduced, simultaneously leading to improved service delivery.

It is generally accepted that for every rand not spent in preventing typical service delivery errors or problems, organisations can expect to spend at least ten times more - and as much as 100 times more - fixing things after an error occurs.
For example, readers will be aware that once a problem has occurred and a citizen takes the initiative to write a letter to their MEC or senior manager in a national department, and/or involves the media, the costs of addressing and responding to the problem can be alarming and escalate dramatically.

Cost = Production Costs + Prevention Costs + Inspection & Correction Costs + Field Problem Costs

  • Production costs: people, materials, overheads, etc.
  • Prevention costs ("The Good"): training, system improvements, gathering & using citizen feedback
  • Inspection & correction costs ("The Bad"): fixing errors BEFORE they reach citizens
  • Field problem costs ("The Ugly"): fixing errors AFTER they reach citizens

Error-related costs account for 30-50% of costs in a typical service delivery organisation.

Figure 1: The costs associated with quality service delivery

Figure 1 illustrates the costs associated with service delivery, and demonstrates how these costs increase exponentially the longer it takes to deal with the problem. The first two costs are unavoidable as they are incurred when the service is produced (i.e. production costs and the systems put in place to ensure effective and efficient delivery). If the systems in place (such as appropriate skills development, a CS survey and a functioning complaints process) are working to their full potential then any problems encountered will be dealt with promptly.

However, if the systems are not in place then other costs are incurred. The first are what we have referred to as inspection and correction costs. These are typically internal measures, which "inspect" service delivery to ensure that any problems that arise can be dealt with before they are actually delivered (e.g. reprinting a manual that is incorrect, but before it has been distributed). The second, which are typically very expensive to rectify, are those costs that a department will face if it has to rectify the service delivery after an unsuccessful attempt has been made to deliver it (e.g. recalling an incorrect manual or guideline after it has been delivered to all the staff of a department).


The most effective strategies to improve service delivery are those that involve staff in both the planning and implementation of service delivery improvement strategies. Any strategy to improve service delivery within a department will need to engage with employees of the department, by:

  • Revising the training of staff to incorporate service delivery improvement initiatives,
  • Understanding and being sensitive to the impact on staff of necessary changes and/or restructuring of the organisation, and
  • Incorporating service delivery standards into individual performance agreements and/or service level agreements.

Communicating the service standards

A crucial component of creating an environment in which quality service delivery will flourish is to strengthen those who benefit from the services a department delivers. However, communicating the information gathered by the CS survey, both internally and externally, requires a robust communication strategy. It is crucial that the communication strategy provides an appropriate feedback loop, ensuring that information is fed both upwards within a department, to allow managers to make informed decisions, and downwards, to keep citizens informed about the initiatives a department is embarking upon as part of its SDIP. Components of a communication strategy would include:

  • Citizen education - i.e. make sure that staff educate citizens about what services a department offers, thus ensuring that citizens use the services in a manner that the departments have deemed appropriate.
  • Provide citizens with reliable and up-to-date information on quality that meets their needs.
  • Provide citizens with information pertaining to the services a department has targeted for improvement.
  • Create effective information dissemination mechanisms that ensure citizens receive important information about each department.
  • Provide assistance to those citizens who require help in making informed decisions about the type of service they would like to access.
  • Identify opportunities for citizens to be involved in the governance and oversight of services delivered by each department.
  • Inform citizens whether the service charter of a department will be updated based on the results of the survey.
  • Commission research on promoting effective use of information by citizens.

Monitoring delivery

Monitoring delivery should be a core component of every department's performance monitoring system. This is done by repeating the survey cycle once the necessary changes have been implemented, continuing to do so on a regular basis, and commissioning smaller diagnostic studies as required. By comparing the original or baseline measures with newer information, a department can track the success or failure of its service delivery improvement efforts.

Citizen ratings of overall satisfaction with performance can be measured through a series of surveys to evaluate the success of the SDIP process over time. As Figure 2 illustrates, the gap between the performance rating and citizens' expectations should diminish as the department identifies and resolves service delivery gaps. By comparing the baseline measures with newer information, the department can track the success of its efforts and continually adjust to the changing needs of its citizens.
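The wave-on-wave comparison just described can be sketched as follows; the survey waves and scores are invented for illustration, not results from the PSC study.

```python
# Sketch of monitoring over repeated survey cycles: the gap between
# expectation and performance ratings should shrink wave by wave if
# the SDIP is working. All scores below are hypothetical.

waves = [
    # (survey wave, mean expectation, mean performance rating)
    ("baseline", 4.5, 3.0),
    ("year 1", 4.5, 3.4),
    ("year 2", 4.6, 3.9),
]

gaps = [(wave, expected - rated) for wave, expected, rated in waves]
shrinking = all(gaps[i][1] > gaps[i + 1][1] for i in range(len(gaps) - 1))
for wave, gap in gaps:
    print(f"{wave}: gap = {gap:.1f}")
print("Gap is shrinking" if shrinking else "Gap is not shrinking - investigate")
```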


This article has demonstrated how information gathered by a reliable and valid CS survey can be used by a department to shape and implement a SDIP. By carrying out these steps departments will ensure that a CS survey plays an important role in fostering a service delivery culture throughout the public service.


References

Department of Health. 2000. A Policy on Quality in Health Care for South Africa (2nd draft). Pretoria: National Department of Health.

Dutka, A. 1994. AMA Handbook for Customer Satisfaction. Illinois: NTC Publishing Group.

Health Services Research Group. 1992. A guide to direct measures of patient satisfaction in clinical practice. Canadian Medical Association Journal, 146(10), 1727-1731.

Public Service Commission. 2000. Survey of Compliance with the Batho Pele Policy.

Public Service Commission. 2001. Monitoring and Evaluation Scoping Project.

White Paper on Transforming Public Service Delivery, No. 1459 of 1997.



I'm sorry, could you please repeat that?

S&T partner David Everatt bemoans the fate of naming. "What's in a name?" - well, there have probably been some good jokes made about Bill Shakespeare's own name over time, but we suspect that Strategy & Tactics has been more sinned against than most.

When we set up S&T, we deliberately chose a name that - we thought - resonated with the struggle against apartheid, which alluded to quality research allowing both strategic and tactical considerations, and which was just plain smart. And with brand recognition created by Marx, Lenin, Mao, Che, the ANC and others.

Or so we thought.

Our first government tender led to us being telephoned by a worried official, asking why we were claiming such high 'S and Ts' - in government-speak, S&T stands for subsistence and travel! Where we thought that the notion of strategy and tactics would have wide currency, given its historical roots in the Marxist lexicon among others, we were proved wrong, time and time again. But in struggling with our name, people have shown real creativity - with spelling and words - and who knows, we might yet change, to one of the names that we’ve been called.

  • Strategy and Testes
  • Strategy and Phindi
  • Strategy and Stastics
  • Strategy & Tact
  • Statistics and Tactics
  • Strateeeegy and Tactics
  • Strateey & Tactics
  • Strtgy and Tactiks
  • Subsistence and travel




HIV/AIDS and the church

The Interfaith Community Development Association (ICDA), together with Strategy & Tactics (S&T), recently completed a study to understand the role of the church with regard to HIV/AIDS for the Department of Social Development in the Gauteng Province.

The study saw ICDA & S&T identify the spatial location of churches around Gauteng, then document the services provided by churches and local organisations with regard to HIV/AIDS, and finally interview selected respondents from a sample of the churches. The interviews focussed on four topics, namely:

  • the number of congregations or denominations,
  • the HIV/AIDS programme that they are conducting,
  • funding, and
  • problems encountered.

For the first step we targeted churches under three groupings, namely mainline, charismatic and independent. We conducted ten face-to-face interviews at eight different churches with individuals from the three groupings. Although the intention was to speak to deacons, we found that some of the churches had assigned a specific person to deal with their HIV/AIDS programme, and in those cases we spoke to that person. A further 15 interviews were conducted telephonically.

The eight churches we visited are all running some form of HIV/AIDS programme. The most common programmes were:

  • HIV/AIDS awareness programmes,
  • Voluntary counselling,
  • Home based care,
  • Support for children orphaned by AIDS, and
  • Child nutrition programmes.

Some of the churches mentioned that they were also running Hospice Wards within some areas of Gauteng.
Respondents mentioned that the most successful programme was home-based care, and that lack of funding was their biggest challenge. Some churches indicated that they received regular funding from external sources, but others were funding their programmes from their own coffers. All the respondents said they were disappointed because the government was doing very little to support their efforts.

Churches mentioned that if funding was available they would focus more on campaigns such as:

  • Dissemination of information about HIV/AIDS and basic education about HIV/AIDS,
  • Project management and education,
  • Providing video courses to all local denominations,
  • Training,
  • Identifying families that are vulnerable and assisting families to draw state grants,
  • Mobilising congregations and congregants, and
  • Basic education about HIV/AIDS support.

Most programmes run by the churches are targeting women and youth infected and affected by HIV/AIDS. Other churches indicated that they also had outreach programmes which were extended to members of the community (for example, where they identified community members who are in need). If churches had sufficient funds they would encourage home-based care on a much larger scale to allow them to target these two groups.


S&T surveying individual giving for Centre for Civil Society

S&T was invited to submit a tender for a large, complex applied research project by the Centre for Civil Society (CCS), run by Professor Adam Habib at the University of Natal. We were extremely pleased to win the job, and David Everatt has joined the project team, which includes Deborah Ewing, Steven Friedman, Mandla Seloane, Mark Swilling, Brij Maharaj, and Annsilla Nyar as the project manager. The focus of the project is on resources that are available, and those that can be mobilised in future, around poverty and development.

The overall project is enormous, with components looking at:

  • Corporate social investment
  • Overseas development assistance
  • State spending
  • Religious groups, trusts and other organised formations, and
  • Individual-level giving.

S&T is responsible for qualitative and quantitative research focusing on the behaviour of individuals. This began with a national set of focus groups, among people of all races and classes, to get them talking about the poor, who they are, how to help them, what causes people support - and which they do not - and why this is the case.

The focus groups, conducted during May and June 2003, provided some fascinating insights - and complexities. For some people, giving includes helping relatives and extended family members, which for others is a responsibility or duty, not a voluntary activity. Some participants never give to certain groups - such as street children or beggars - while others consistently do so. One respondent, pressed about why he gives to the poor, answered: "Because we're all going to die, and I'll have to answer to God for my life!" This pointed to a deeper methodological challenge, namely the fact that many people want to give more than they do and it is vital not to push them into a defensive posture.

The focus groups were vital in providing insights into the way in which people think about giving; how they talk about it; and thus how to design survey questions that will help us better understand the issue. The results fed into a lengthy and complex survey design process, which is currently being piloted. Once the results of the pilot phase have been studied, the survey will go into the field, using a representative national sample of 3 600 respondents. The data should be available in late 2003 or early 2004.
The aim of the survey is twofold: (a) to generate defensible quantitative data about the level and type of resources mobilised by South African citizens for poverty and development, and (b) a detailed analysis of the factors that facilitate and/or restrict giving.

The data will allow us to develop South African models of giving that are located in our different community and other formations, and which will be more useful guides to giving behaviour in this country than the vast American and European models.


In praise of middle managers

Matthew Smith, in conjunction with Jon File (CHEPS) and Marc Vermulen (Tilburg Management School), is currently facilitating a programme to empower middle managers at Technikon Northern Gauteng (TNG). The aim of the programme is twofold: in the short term we hope to assist the managers to deal with TNG's forthcoming merger with Pretoria Technikon and Technikon North West, and in the long term the aim is to help make the managers more effective and efficient.

The programme consists of five one-week modules, which involve a wide variety of teaching and learning techniques (e.g. local experts, videos, case studies, interactive DVDs), with a strong emphasis on participant interaction. The modules include a mixture of theory (e.g. current thinking on change management) and practice (e.g. how to run an effective meeting, based on the John Cleese video series "Meetings Bloody Meetings" and "More Bloody Meetings").

The topics covered in each module are grouped around a particular theme that, whilst specific to higher education, draws heavily on current thinking in the business world. In the introductory module, for instance, we explore the key characteristics of a university as an organisation, and what it means that institutions of higher education typically have ambiguous goals, rely heavily on problematic technology, hold disputed notions of knowledge and consist largely of loosely coupled departments and business units. Added to this, a higher education institution relies heavily on middle managers to manage the organisation, which requires the middle manager to play conflicting roles ranging from what Quy Nguyen Huy refers to as "the Entrepreneur and the Communicator to the Therapist and the Tightrope Artist".

Building on this introduction, the second module focuses on governance and leadership issues in higher education. For example, in this module the participants explored the notion of change management within the context of challenges facing higher education such as economic globalisation, knowledge based economies, the need for mass higher education provision and changes in the nature of government co-ordination.

In module three, devoted to human resource issues, the participants discussed and debated issues such as:

  • What is human capital,
  • How to develop an HR strategy or HR model for an organisation,
  • The key factors to developing capacity within an organisation,
  • Different types of compensation systems used to provide incentives for an institution's workforce, and
  • How to motivate one's staff.

Module four examined planning and financial management within the TNG context. Participants began with the fundamentals of financial management, then moved on to the essential components of a business plan (i.e. mission, objectives, strategy, etc.) and the associated budget and budgeting techniques. In addition, the group focussed on financial reporting (e.g. timeliness, accuracy and comprehensibility) and monitoring; in particular, participants identified relevant financial and non-financial indicators for their department/business unit.

The final module, the Capstone module, saw the participants, working in groups, applying what they had learnt in the previous four modules to a complex case study in which they had to develop solutions to a series of challenges facing an institution. The challenges included a) reduction in government subsidy, b) increasing staff teaching loads, and c) opportunities to increase consulting hours.



S&T appointed ANC research provider for 2004

Since S&T was formed in 1998, we have worked closely with the African National Congress (ANC), the ruling party of South Africa. We provided quantitative and qualitative research services to the ANC for the 1999 general and 2000 local elections, and trained ANC staff from the provinces in research methodology. S&T was invited to submit a research tender and earlier this year we were extremely proud to be appointed as the ANC's qualitative research provider for the 2004 general election. At the same time, we won the tender to conduct a baseline survey for the ANC in Gauteng.

S&T contracted by the Anglo Platinum Group

S&T, in partnership with Green Chilli Concepts (a group of independent researchers based at Wits University), has been contracted by the Anglo Platinum Group and Stakeholders (which includes all trade unions with members) to undertake qualitative and quantitative research into working shift arrangements.

S&T participates in Salzburg Seminar

David Everatt has been awarded a fellowship to participate in the prestigious Salzburg Seminar in October 2003. The focus of the seminar this year will be on engaging youth in community development.
