Welcome Nobi and Matthew
S&T is proud to announce two new partners. Nobayethi Dube (Nobi) was born in Soweto in 1968. She matriculated in 1986 at Emadwaleni High School. In 1987 she started working at an engineering company in Randburg. In 1991 Nobi joined the Community Agency for Social Enquiry (CASE), a non-governmental organisation, as a receptionist. While working there, Nobi successfully completed her BA Admin and an Honours degree in Industrial Psychology, graduating in May 1999. Nobi was successively promoted to Project Administrator, and finally to helping set up and run the fieldwork department.
Nobi is that rare researcher who can do every part of the job, from design to fieldwork to analysis. She has worked on issues including gender, disability, health, and public works. She brings a great depth of experience to S&T.
Welcome too to Matthew J. Smith. He has degrees from the University of Cape Town and Rhodes University, and is currently completing a Ph.D. at Syracuse University, New York. He has done extensive evaluation work on social programmes in South Africa and in the USA, where he spent three years on a Fulbright scholarship.
In South Africa, he has worked with education institutions, government departments and corporations. In the USA he worked with foundations and institutions of higher education.
Matthew has published articles in local and international journals. He was the lead researcher in the second Kaiser Health baseline survey, which is about to be released. Having worked in education, health, media and other fields, he brings powerful research skills and sectoral specialisation to S&T, and will be running our Cape Town operation.
Matthew runs the Cape Town office, with the help of Sihaam Gasnola. He divides his time between helping S&T on its national projects and doing work in the areas of health and education. He is currently working with the Centre for Higher Education Transformation (CHET) on their Transformation Indicator Project, which is examining whether institutions of higher education have transformed since South Africa's independence in 1994.
Matthew has recently completed a scan of the higher education environment in Mpumalanga for the Office of the Premier.
Matthew is also working with the Health Systems Trust's Initiative for Sub-District Support (ISDS). He is designing a client satisfaction index to measure clients' perceptions of the quality of service they receive at public hospitals in South Africa. As a member of the national task team to examine patients' perceptions of hospital quality, Matthew hopes to extend his research into hospitals across South Africa.
Monitoring anti-poverty programmes: the cornerstone of S&T
S&T is involved in the design and implementation of monitoring systems for three key government anti-poverty programmes. Below, the challenges of social monitoring are explored.
S&T is involved in a wide range of activities, but designing efficient and
effective monitoring systems has emerged as our key activity for the next
couple of years. In the eight months that S&T has been in existence, we have
won contracts for monitoring the following anti-poverty programmes:
- National monitoring analysis for the Community Based Public Works Programme (CBPWP)
- National monitoring analysis for the Consolidated Municipal Infrastructure Programme (CMIP)
- Designing and implementing the monitoring system for the Department of Welfare's 'War on Poverty', which is managed by the Independent Development Trust (IDT)
What is monitoring?
Monitoring plays an integral part in performance evaluation, establishing
in an ongoing way how projects are being implemented by the institution or
agency concerned. Although methods are similar, monitoring and evaluation
have very different functions.
Monitoring allows managers to determine project costs, the value projects
have added to recipients and participants, and the efficiency of the project.
Through a data gathering system, an essential component of monitoring, participants
have a tool with which to document the progress made towards achieving self-set
goals. When data analysis is workshopped with project managers and workers,
it allows for participative decision-making.
This is important, because some decisions are hard to take. For example,
following the 1997 evaluation of the CBPWP, the Department of Public Works
set quota ranges for the employment of women and youth. The monitoring system
will measure the extent to which these targets are being met. If they are
not, different recruitment methods may be needed. This is preferable to a
summative evaluation finding that too few women and youths had been employed,
once it is too late to do anything about it.
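The quota check described above can be sketched in a few lines. The quota ranges and project figures below are invented for illustration and are not the Department of Public Works' actual targets.

```python
# Hypothetical sketch of a quota check in a monitoring system.
# Figures and quota shares are illustrative, not actual CBPWP targets.

def quota_met(employed: int, total: int, minimum_share: float) -> bool:
    """Return True if a group's share of project employment meets the quota."""
    if total == 0:
        return False
    return employed / total >= minimum_share

# Illustrative monitoring data for one project.
project = {"total_jobs": 200, "women": 90, "youth": 50}

women_ok = quota_met(project["women"], project["total_jobs"], 0.40)  # 45% >= 40%
youth_ok = quota_met(project["youth"], project["total_jobs"], 0.30)  # 25% < 30%

print(f"Women quota met: {women_ok}")
print(f"Youth quota met: {youth_ok}")
```

Run monthly across all projects, a check of this kind surfaces a recruitment problem while there is still time to change methods.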
With a monitoring system providing on-going information, project workers
can be involved in discussion and decision-making, as the system shows them
what they and others like them are doing. The need to change direction comes
from their own assessment of whether their targets are being met.
In any large-scale programme, a key indicator for efficiency is the size
of the administrative overhead. Internationally, a figure of around 10% is
regarded as acceptable.
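The efficiency indicator just described reduces to a single ratio; the spending figures below are invented for illustration.

```python
# Sketch of the efficiency indicator described above: the share of total
# spending absorbed by administration, checked against the ~10% norm.
# The figures are hypothetical.

def admin_overhead_share(admin_spend: float, total_spend: float) -> float:
    """Administrative overhead as a fraction of total programme spend."""
    return admin_spend / total_spend

total = 50_000_000   # e.g. a R50m grant
admin = 4_500_000    # hypothetical administrative costs

share = admin_overhead_share(admin, total)
print(f"Overhead: {share:.1%}")  # 9.0%
print("Within the 10% norm" if share <= 0.10 else "Above the 10% norm")
```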
Achieving a balance between delivery and administrative costs is not a simple
matter. S&T has to balance effectiveness and cost when, for example, we are
asked to monitor not just employment and expenditure figures, but, as with
the 'War on Poverty', to monitor the impact of programmes. This requires
multiple methods, with the system having to absorb and synthesise qualitative
as well as quantitative data.
Anti-poverty programmes must be efficient, directing the majority of resources
directly to the targeted beneficiaries. At the same time, the monitoring system
must be able to measure whether intended beneficiaries are actual beneficiaries,
and to inform management decisions.
Very few South African agencies have succeeded in designing monitoring systems
that are affordable, accurate, and useful to both workers and managers. S&T
is looking forward to pioneering cost-effective and replicable development monitoring systems.

Designing a monitoring system for the IDT
The Independent Development Trust (IDT) has been contracted by the Department of Welfare to manage its R203m Poverty Relief and Infrastructure Investment Fund. This, the second grant awarded to the Department, builds on a R50m grant received in the 1997/98 financial year. The IDT has commissioned S&T to undertake a review of the R50m grant; to design and pilot a monitoring instrument; and to design and help implement a monitoring system.
South Africa's welfare system before 1994, characterised by disparities
and inequality, focused on various specialities and the provision of rehabilitative
and institutional services.
The 1997 White Paper for Social Welfare set the scene for transforming welfare
services to an approach that would "lead to self-sufficiency and sustainability".
Through the current Poverty Relief and Infrastructure Investment Fund, the
Department is trying to position its programmes within a developmental approach
to welfare services.
Monitoring the R50m grant
There seems to be consensus amongst Departmental personnel that the monitoring
system for the first grant was inadequately conceptualised and designed, and
that monitoring of projects was inadequate. A private consultancy, appointed
to assist in the management of the programme, was briefed to collate the provincial
monitoring reports into an integrated monthly report, and to visit and monitor projects.
The actual monitoring process was left up to each province to develop, depending
on their capacity. Although a standardised reporting format was eventually
devised, a number of provinces in the interim developed their own format and
were reluctant to change.
In most provinces, projects were not visited on a regular basis and, in
some instances, were not visited at all. The reports submitted by funded projects
were extremely limited and focused purely on financial issues. A number of
projects did not submit reports for the entire funding period.
Reasons cited for the failure of the monitoring system included lack of
departmental capacity to manage the programme; the emphasis on financial aspects
of the projects; the inaccessibility of the monitoring instruments, and the
complexity of the task, given the wide range of projects.
Monitoring the R203m fund
Where does this leave the monitoring system for the R203m Fund? The Department
of Welfare still lacks the capacity to manage the day-to-day activities of
the programme efficiently and effectively. A new institutional arrangement
has been developed to try and address this.
Non-governmental organisations in each province have been contracted to
serve as cluster co-ordinators, responsible for a number of projects. Their
main tasks are to monitor the activities and progress of the projects and
help build grass-roots capacity. The IDT is expected to play a largely supervisory
role, ensuring that all actors are fulfilling their roles and responsibilities.
On the surface this appears to be an improved system that plugs many of
the gaps from the previous system. However, not all of the cluster co-ordinators
appear to have the capacity to administer the monitoring instruments on a
regular basis. There are also a number of projects for which cluster co-ordinators
have not been appointed. Furthermore, the capacity of provincial departments
to deliver regular feedback on cluster co-ordinators is questionable, given their own capacity constraints.
Aside from the capacity of certain actors, there are additional questions
that can be raised about the envisaged system:
- Can projects be expected to provide honest feedback on the cluster co-ordinators
when it is the cluster co-ordinators themselves collecting the feedback
at project level?
- Will a cluster co-ordinator provide truthful information on a project
when their continued employment in the programme depends on the project
continuing to receive funding?
Implications for monitoring
For a well-designed and well-maintained monitoring system, considerable
resourcing is needed. But anti-poverty programmes cannot have high administrative
overheads: money is primarily aimed at helping those most in need.
The monitoring system that the Department of Welfare is looking to implement
is time-consuming and costly. The capacity-building of those on the ground
will need to be counter-balanced with a system that can still deliver if capacity
is lacking. We need to ensure that the time it takes to bring cluster co-ordinators
and provincial departments up to speed does not delay or compromise the collecting
of appropriate and useful data.
Monitoring does not equal evaluation
The conflation of monitoring and evaluation is the cause of many problems in programme implementation. For best results, monitoring and evaluation must be understood as different processes, but treated in an integrated way. In many programme management manuals, government tenders and even methods text-books, monitoring and evaluation are treated as near-identical. They share a single budget line item, and little thought is given to the complexities of design and implementation. This gives rise to problems during the programme implementation phase.
Which is which?
Monitoring comprises the regular and on-going collection, analysis and reporting
on data relating to progress and expenditure. Data may be qualitative or quantitative,
but must be designed to measure specific indicators - usually called Key Performance
Indicators (KPIs) - and must be flexible enough to be synthesised with data
coming from other projects and other methods.
Monitoring anti-poverty programmes usually requires a mixed bag of research
methods. Accessible questionnaires are commonly used to collect basic data
(regarding performance, employment targets, and so on). Financial data are
also relatively easy to collect. But a monitoring system also needs to trigger
rapid result diagnostic studies which help explain why the overall pattern
looks the way it does, and - critically - try to suggest new ways of dealing
with an issue. For example, if monitoring data repeatedly show that women
are not being employed in sufficient numbers across the programme but are
employed in specific instances, a research team may be sent to analyse why
some projects are succeeding, which of their strategies are relevant, and
so on. In other words, a good monitoring system should identify both problems and pointers to their solutions.
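The trigger logic in the example above can be made concrete: flag the programme-wide shortfall, then pick out the projects that buck the pattern for a diagnostic study. All project data and the target share are hypothetical.

```python
# Toy illustration of a monitoring trigger: a programme-wide shortfall
# against a target, plus identification of projects that are succeeding
# and so merit a diagnostic study. Data are invented.

projects = {
    "Project A": {"women": 10, "total": 100},
    "Project B": {"women": 55, "total": 100},
    "Project C": {"women": 12, "total": 100},
}
TARGET = 0.40  # illustrative programme-wide target share for women

shares = {name: d["women"] / d["total"] for name, d in projects.items()}
overall = sum(d["women"] for d in projects.values()) / sum(
    d["total"] for d in projects.values()
)

succeeding = [name for name, share in shares.items() if share >= TARGET]
if overall < TARGET:
    print(f"Programme-wide share {overall:.0%} is below the {TARGET:.0%} target.")
    print(f"Send a diagnostic team to study: {succeeding}")
```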
Evaluations are specifically designed research interventions, which usually
aim to answer specific questions. Formative evaluations take place prior to
implementation, to assess the effectiveness of planning, targeting, resource
allocation and so on. Evaluations can also take place during the lifespan
of a programme, and summative evaluations can be used to provide an overall
judgement of the programme at its completion.
Confusingly, evaluations often form key parts of monitoring systems. For
example, in our work for the Department of Public Works and the Department
of Welfare, diagnostic studies will be used. These are fast turn-around evaluations,
focusing on specific issues. The data from these studies are absorbed by the
monitoring system. They are a critical addition to the regular data coming
from questionnaires and elsewhere.
By the same token, the cumulative output of a monitoring system can form
a more or less complete summative evaluation (depending on the KPIs it measured)
- though it lacks the objectivity that can come from external evaluators.
What happens to the results?
An easier way of understanding the difference may be to assess what happens
to their output. A monitoring system provides management data. Information
must be regular and reliable, because it will be used to inform important
decisions - should a programme continue or change direction? Should resources
be shifted to areas that are doing well and particular project types scrapped?
These and many other questions can be assessed month by month using the output
of a good monitoring system, alongside other inputs.
Evaluations are more commonly used to measure the success of particular
strategies or projects. They more commonly inform policy and programme recommendations.
In part, this is a design fault: most South African clients insist on summative
evaluations only, and therefore lose out on the valuable insights that formative
and other forms of evaluation can offer.
Conflation and confusion
In our experience, monitoring is frequently regarded as little more than
a secretarial function which government departments assume they can manage
themselves. As a result, budgets are tiny or non-existent.
We have yet to find a situation where this has worked.
Alternatively, departments go 'high-tech': we have come across monitoring
system proposals that are based on projects dialing data into the system via
the internet, and downloading analysis the same way. A less realistic scenario
could scarcely be imagined, at least in the anti-poverty programmes we are involved in.
Finally, government departments assume that a software package designed
to monitor spending can, with a little tinkering, monitor employment patterns
and programme impact. Again, successful examples are hard to find.
If "M" and "E" are conflated, all forms of evaluation are also commonly
run together. Our most common experience is being brought on board a programme
as it begins winding down, to undertake a summative evaluation. S&T was not
present for the critical discussions in which key choices were made.
We simply go in and measure.
The need for evaluators to be part of a programme as it unfolds is fundamental,
but rarely acknowledged.
S&T's solution is to provide monitoring and evaluation as a package. Our
monitoring systems are based on regular data complemented by highly qualified
evaluation teams undertaking targeted studies, aware of programme dynamics
and focused on finding solutions not merely making judgements. Because we
are part of programme management teams, we can help ensure that decisions
are based on data, and on reliable data. Finally, we are committed to
seeing the programme through. We only regard ourselves as having done a good
job if the overall programme is judged to have been successful.
Reality or perception: What is life in South Africa really like?
In April this year six major newspapers carried S&T's research into the homes of millions of South Africans. In-depth articles covering six days of print got the country debating the differences between the reality of life in South Africa and the way we perceive it.
Reality Check was a joint project of Independent Newspapers and the Henry
J. Kaiser Foundation. Its purpose was to take stock of South Africa's new
democracy from the perspective of its people. A survey questionnaire was developed
by S&T and administered to 3 000 households in November and December 1998.
The sample provides statistically valid findings for the South African population
as a whole, as well as for the different races and provinces.
Data analysis was led by David Everatt and Ross Jennings of S&T, and by
Mollyann Brodie, vice-president for Public Opinion Research at the Kaiser Family Foundation.
Journalists from Independent Newspapers built on this analysis and wrote
a series of articles that were carried in the group's daily newspapers from
the 19th to the 23rd of April 1999. A supplement with all the results was
carried on 28th April 1999.
The Independent Newspapers' archives on their website have the articles
from Reality Check: www.inc.co.za/archives/1999/9904/21/keiser1904
A user-friendly M&E guide for Rand Water
One of the key challenges facing those of us involved in applied research is to share the lessons of our experience with others. S&T has been commissioned to produce an accessible guide to both monitoring and evaluation.
Linda Meulman heads the Community Based project unit within Rand Water.
She has been developing programme and project management tools for her staff.
We have been commissioned to complete the monitoring and evaluation sections.
The end product must be simple and easy-to-use, as it is aimed at practitioners.
To produce a guide to monitoring and evaluation that is accessible and reliable,
S&T staff will have to identify, analyse and critique current literature in
both fields. This will be supplemented by examples of good practice. Entire
American journals are devoted to programme evaluation, and a massive body
of other published material on the subject also exists. Sifting through these
sources for high-quality material will be a slow process, but invaluable for the final product.
The project will take some three months to complete.
S&T monitoring municipal infrastructure
In early 1999, S&T tendered for the management of the Consolidated Municipal Infrastructure Programme (CMIP), alongside Epa, BKS and others. CMIP is one of the largest programmes undertaken by government. The programme aims to provide basic levels of services to low income households while also contributing to other government strategic and intervention policy objectives. We were very excited to win the tender.
The programme targets vulnerable communities and sectors of society like
women, youth and the disabled for job creation. The programme is being re-directed
towards meeting the developmental objectives of local government as expressed
in various policy documents.
The Department indicated that their experience in previous programmes was
not satisfactory, especially with regard to issues of development focus and
monitoring. This sentiment corroborated some of our experiences in setting
up monitoring systems for national Departments involved in similar programmes
to CMIP. Our experience of monitoring systems for anti-poverty programmes
has not been positive, as discussed on page one.
For national Departments to monitor critical indicators is not always an
easy exercise. In addition, it normally takes time to find a common understanding
and purpose for the team involved. Those involved in the system need to own
it, to ensure that they put in the effort needed to make it work. The implementation
and support teams are central to the success or failure of the entire programme.
They need to understand the intended outcome of the system, and its importance.
In CMIP, speed of delivery was not the key criterion for judging success or
failure. Rather, poverty measurements are now used to assess whether the programme
achieved its objectives or not. According to the Chief Directorate in CMIP,
their previous programme experience was different. Implementing agencies were
insufficiently concerned with development objectives during implementation,
but focused on technical proficiency and speed of delivery. There was also
little emphasis on communicating progress to South Africans. Monitoring focused
on the progress of construction rather than on performance and impact regarding
key development indicators.
As such, there were fewer systems in place to assist with the attainment
of development objectives. To use the expression of a senior member of the
Department, "the previous programme was driven by engineers who concerned
themselves only with the laying down of pipes with no development dimension".
New challenges for CMIP
The new programme is ambitious. That being the case, the programme must
keep all players in touch and working together. To directly address the legacies
of the past, it must develop people who will facilitate and monitor development.
The S&T monitoring system will be in place to assist the Department in assessing
whether the intended targets are being met or not. Community empowerment and
job creation, the empowerment of women and young people and similar Key Performance
Indicators (KPIs) will need to be monitored at all levels of the programme.
This calls for dedicated individuals to account on a regular basis as to
how and why certain targets were - or were not - reached. This is always
the most difficult aspect of monitoring: to ensure that those who are involved
in the programme do not perceive the monitoring exercise as a "Big Brother"
situation. An appreciation of monitoring will have to be cultivated among
those involved in the programme, so that it becomes part of managing the
entire process of poverty alleviation.
On the other hand, the Department will have to come to terms with the fact
that it cannot monitor everything. The system will have to be manageable and
simple. Some of the desired KPIs will have to be excluded if administrative
costs are to be kept in check.
As the new CMIP swings into operation, it will be equipped with a monitoring
system that should allow all those involved - from workers to government officials
- to know where they are, how they are performing, and what their weak points
are. Decisions should be easier to take, and based on data. S&T regards this
job as one of the most important we shall be involved in for the next couple of years.

Public Works Programme monitored and evaluated
Since 1994/95, the Department of Public Works has focused not simply on delivery, but on delivery with special concern for poor communities. Methods of uplifting poor communities have involved community empowerment, community participation in identifying projects, project management, poverty alleviation, and post-project involvement of local communities. During all these processes, important lessons were accumulated, and were used in re-aligning the Department's Community-Based Public Works Programme (CBPWP). The issue of monitoring emerged as central to the successful realisation of the other components of the CBPWP.
Lessons from the evaluation process
The 1997 evaluation of the CBPWP conducted by CASE and the ILO (Moagi
Ntsime and David Everatt were key members of the team) noted that South Africa
has probably one of the best public works programmes in the world. The report
further indicated that technical design standards and the quality of completed
physical infrastructure surpassed anything the ILO members of the evaluation
team had previously encountered. The programme nevertheless had shortcomings.
A key failing was the monitoring component of the programme.
The evaluation drew a distinction between monitoring progress and performance.
Progress monitoring involves keeping track of disbursements, work days created
and so on. Performance monitoring focuses on programme outputs and the overall
programme impact. Neither was adequately performed. The evaluation team found
that there was no reliable data to assist the team in drawing conclusions
about the performance of the programme, let alone to inform programme managers
as they went about their work.
In response to the evaluation, in early 1998 the Department formed a Pre-Implementation
Task Team (PITT) to assist in preparations for the next phase of programme
implementation. Moagi was in charge of monitoring and evaluation on the task
team; David was responsible for targeting, which he worked on with Ross Jennings.
According to the then Public Works Deputy Director-General, Lulu Gwagwa,
the evaluation team, while criticising the monitoring system, did not analyse
the nature of the problem in detail. This was Moagi's task on PITT. This started
with a detailed reconstruction of the existing data-path, from projects to
national office. What was found was common enough: provincial department staff,
already busy, were told to collate data from (erratic) project reports, tabulate
the data and send it to Pretoria. The result: weak data, which was mostly
unreliable. In many instances, the same data were reported month after month.
No one checked or verified anyone else's data. The system was chaotic.
The Department has worked hard to try to ensure that monitoring activities
constitute an integral part of the entire poverty alleviation programme during
and after its implementation. Moagi's report to PITT was accepted, and S&T
was given the task of analysing monitoring data for the current programme,
Anti-Poverty Programme-274 (APP-274). The programme has just started the implementation
phase, and we are facing a fascinating challenge, to make the system work
- not just to produce accurate data, but to do so in sufficient time for it
to be useful to programme managers.
S&T will be working on this project at least until the end of 1999.
ILO tracer study
The International Labour Organisation (ILO) and the Department of Public Works (DPW), under the Realigned Community Based Public Works Programme, have approached S&T for a tracer study to assess the impact of the training provided by the DPW. In particular, they want to know if it has an impact on the economic situation of trainees.
The aim of the programme is to enhance and optimise the employment generated
by the construction sector and to provide opportunities for previously disadvantaged
and emerging black contractors to take a fair part in the construction process.
The study will take place in three provinces, and comprises three components:
1. a historical study of approximately 15 people trained under the pilot
projects component of the National Public Works Programme;
2. a tracer study of approximately 60 people trained under the current
phase of the Community Based Public Works Programme; and
3. a tracer study of approximately 30 people trained in mainstream Department
of Public Works programmes.
The sample will also include a control group who work on projects, but receive
no formal training.
Aims of the study
Our first task will be to identify members of the community who will work
on projects and receive project-related training. Projects will also employ
members of the community who will not receive training. A comparison of these
two groups will be revisited throughout the study. Training is meant to enhance
the "economic viability" of trainees; our job is to find out if this is true,
and to what extent.
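The core of the comparison described above is a contrast between outcomes for the trained group and the untrained control group. As a minimal sketch, assuming hypothetical monthly income figures (not study data):

```python
# Minimal sketch of the tracer-study comparison: contrasting an outcome
# measure for trained workers against the untrained control group.
# The income figures are invented for illustration.

def mean(values):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

trained_income = [1200, 950, 1400, 1100, 1300]  # hypothetical, in rand/month
control_income = [900, 850, 1000, 950, 800]     # hypothetical control group

difference = mean(trained_income) - mean(control_income)
print(f"Trained mean:  R{mean(trained_income):.0f}")
print(f"Control mean:  R{mean(control_income):.0f}")
print(f"Difference:    R{difference:.0f}")
```

In the real study this comparison would be revisited at each of the three visits, so that any gap between the groups can be tracked over time rather than measured once.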
The second task will be to look at DPW trainees in mainstream projects.
The study will be conducted in a series of 3 visits. During the first visit
the research team is expected to look at the criteria for selecting trainees,
the depth of the training given and the response of the trainees to the training.
The research team will then identify 15 trainees who will be expected to diarise
their experiences after being trained. This will be followed up on subsequent visits.
The second visit will take place approximately 3-4 months later. This visit
will look at the relevance and depth of the training provided. The visit will
particularly look at the patterns of behaviour between the trained and non-trained
group. The third visit will be divided into a series of visits and will take
place approximately six to nine months after the second visit. It will look
at the experience trainees and workers have undergone through the entire project,
and in particular whether their economic situation has improved.
The tracer study is expected to run into the year 2000.
Baseline studies
S&T recently completed the 'Reality Check' study for the Kaiser Family Foundation
and the Independent Newspaper group. Our completion of Project Readiness Assessments
for all projects being funded by the Independent Development Trust on behalf
of the Department of Welfare comprises an organisational baseline. Our partners
have been involved in key baseline studies in the past, in the areas of youth,
health, gender and so on.
Baseline studies are critical for good monitoring systems, and for reliable
evaluations. Baseline studies - whatever methods are used - provide (as their
name suggests) a benchmark against which subsequent performance can be measured.
As a result, baseline studies are normally quantitative.
Baseline surveys in South Africa have had a profound effect on developing
our understanding of the socio-economic terrain. In all sectors of society,
baseline surveys have allowed us to develop a detailed picture of what is
happening. Moreover, because the baseline survey has given us a snapshot of
what is happening in those sectors, the snapshot can then be compared with
other snapshots taken at a later date.
In other words, the value of baseline surveys becomes more apparent over
time as they allow us to measure change since the first survey. The value
of measuring this change can be seen in the health sector, where the first
Kaiser Family Foundation baseline survey on health inequalities was performed
in 1994. A second survey was performed in 1998, which allowed researchers
to compare the effectiveness and quality of the delivery of health care in
1998 with that in 1994. This comparison allowed health practitioners and policy
makers to establish what had improved and what had not in the intervening
four years. This information will also assist strategic planners in developing
achievable goals, based on the comparisons made between the baseline study
and subsequent studies.
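The baseline logic described above amounts to a simple subtraction: the follow-up reading minus the benchmark. The indicator and its values below are invented, not the Kaiser surveys' actual findings.

```python
# Sketch of the baseline comparison described above: measuring change
# in an indicator between a baseline survey and a follow-up.
# The indicator and figures are hypothetical.

def change(baseline: float, follow_up: float) -> float:
    """Percentage-point change in an indicator since the baseline."""
    return follow_up - baseline

# Hypothetical indicator: % of households within 30 minutes of a clinic.
baseline_1994 = 62.0
follow_up_1998 = 71.5

delta = change(baseline_1994, follow_up_1998)
print(f"Change since baseline: {delta:+.1f} percentage points")
```

Without the baseline figure, the 1998 reading on its own would say nothing about whether delivery had improved; the benchmark is what turns a snapshot into a measure of change.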
Performance planning also cannot be done without a baseline study. The
baseline study provides the information that allows one to set the benchmarks
against which performance can be measured. Without the baseline study the
benchmarks are likely to be meaningless and not grounded in reality.
Read more about baselines, evaluation and related issues in our next newsletter.