This article is taken from the June 1999 Phatlalatsa newsletter

 

Monitoring does not equal evaluation

The conflation of monitoring and evaluation is the cause of many problems in programme implementation. For best results, monitoring and evaluation must be understood as distinct processes, but treated in an integrated way. In many programme management manuals, government tenders and even methods textbooks, monitoring and evaluation are treated as near-identical: they share a single budget line item, and little thought is given to the complexities of designing and implementing each. The problems then surface during the programme implementation phase.

Which is which?

Monitoring

Monitoring comprises the regular, ongoing collection, analysis and reporting of data relating to progress and expenditure. Data may be qualitative or quantitative, but must be designed to measure specific indicators - usually called Key Performance Indicators (KPIs) - and must be flexible enough to be synthesised with data coming from other projects and other methods.

Monitoring anti-poverty programmes usually requires a mixed bag of research methods. Accessible questionnaires are commonly used to collect basic data (on performance, employment targets, and so on). Financial data are also relatively easy to collect. But a monitoring system also needs to trigger rapid-result diagnostic studies, which help explain why the overall pattern looks the way it does and - critically - try to suggest new ways of dealing with an issue. For example, if monitoring data repeatedly show that women are not being employed in sufficient numbers across the programme but are employed in specific instances, a research team may be sent to analyse why some projects are succeeding, which of their strategies are relevant elsewhere, and so on. In other words, a good monitoring system should identify both problem and solution.

Evaluation

Evaluations are purpose-designed research interventions that aim to answer specific questions. Formative evaluations take place prior to implementation, to assess the effectiveness of planning, targeting, resource allocation and so on. Evaluations can also take place during the lifespan of a programme, and summative evaluations can be used to provide an overall judgement of the programme at its completion.

Confusingly, evaluations often form key parts of monitoring systems. For example, in our work for the Department of Public Works and the Department of Welfare, diagnostic studies will be used. These are fast turn-around evaluations focusing on specific issues. The data from these studies are absorbed by the monitoring system and are a critical addition to the regular data coming from questionnaires and elsewhere.

By the same token, the cumulative output of a monitoring system can serve as a more or less complete summative evaluation, depending on the KPIs it measured - though it lacks the objectivity that external evaluators can bring.

What happens to the results?

An easier way of understanding the difference may be to look at what happens to the output of each. A monitoring system provides management data. Information must be regular and reliable, because it will be used to inform important decisions: should a programme continue or change direction? Should resources be shifted to areas that are doing well and particular project types be scrapped? These and many other questions can be assessed month by month using the output of a good monitoring system, alongside other inputs.

Evaluations are more commonly used to measure the success of particular strategies or projects, and they more commonly inform policy and programme recommendations. In part, this is a design fault: most South African clients insist on summative evaluations only, and therefore lose out on the valuable insights that formative and other forms of evaluation can offer.

Conflation and confusion

In our experience, monitoring is frequently regarded as little more than a secretarial function which government departments assume they can manage themselves. As a result, budgets are tiny or non-existent. We have yet to find a situation where this has worked.

Alternatively, departments go 'high-tech': we have come across monitoring system proposals based on projects dialling data into the system via the internet and downloading analysis the same way. A less realistic scenario could scarcely be imagined, at least in the anti-poverty programmes we are working on.

Finally, government departments assume that a software package designed to monitor spending can, with a little tinkering, monitor employment patterns and programme impact. Again, successful examples are hard to find.

If "M" and "E" are conflated, all forms of evaluation are also commonly run together. Our most common experience is being brought on board a programme as it begins winding down, to undertake a summative evaluation. S&T were not present for critical discussions that led to critical choices being made. We simply go in and measure.

The need for evaluators to be part of a programme as it unfolds is fundamental, but rarely acknowledged.

The solution?

S&T's solution is to provide monitoring and evaluation as a package. Our monitoring systems are based on regular data, complemented by highly qualified evaluation teams undertaking targeted studies - teams aware of programme dynamics and focused on finding solutions, not merely making judgements. Because we are part of programme management teams, we can help ensure that decisions are based on data, and on reliable data. Finally, we are committed to seeing the programme through. We only regard ourselves as having done a good job if the overall programme is judged to have been successful.

 
