
This article is taken from the April 2000 Phatlalatsa newsletter


The status of monitoring

S&T partner Moagi Ntsime is involved in monitoring analysis for the Consolidated Municipal Infrastructure Programme, the Community Based Public Works Programme and others. In this article he takes a look at the status - or lack of it - given to monitoring in development programmes.

Why the need for monitoring

Most government and development agencies spend a substantial percentage of their budgets on development and poverty alleviation programmes among target groups. As part of their advocacy strategy, policy-makers and politicians praise these strategies in speeches about their commitment to improving the lives of ordinary citizens. What matters, however, is being able to account accurately for the extent to which budgets reached the target groups and made a positive impact. For this to happen, a functioning monitoring strategy and system remains central in all respects.

Monitoring refers to the systematic observation and documentation of information on the implementation of a project, based on the project plan. It comprises the regular and ongoing collection, analysis and reporting of data relating to progress and expenditure. Data may be qualitative and/or quantitative, but must be designed to measure specific indicators.

Monitoring is distinct from the evaluation of a project by external agents. Monitoring is regular, and is an internal management function. Its output should allow key players at all levels – from project to programme – to assess their progress, identify problems, workshop solutions and improve efficiency. An evaluation is normally fairly broad in scope, and provides a broad judgement on progress. It is rarely located at project level, and results rarely reach managers or PIAs in time to influence implementation.

Monitoring anti-poverty programmes

Monitoring a poverty alleviation programme usually requires a range of methods. These might include the use of accessible questionnaires to collect basic data, participative techniques, qualitative instruments, and so on. The system usually has to work on three levels.

Monitoring the performance of the project, which includes observing, measuring and recording

    • The development of project resources in comparison with what was scheduled in the plan of operations/work plan and the related personnel, input and budget plans, as well as the accounting system.
    • How timely, competently and efficiently the project activities have been executed, and which milestones have been achieved compared to what was scheduled in the Plan of Operation.
    • The outputs/results achieved in comparison with the indicators established and agreed to.

Monitoring the framework, which includes observing, measuring and recording

    • Changes in the assumptions in relation to specified indicators or minimum thresholds, which need to be monitored to ensure project success.
    • Unforeseen side effects of project interventions with special emphasis on negative side effects.

Monitoring the impact of the project, by observing, measuring and recording

    • The extent to which the various target groups are utilising the goods and services delivered by the project, compared to what was estimated in the indicators.
    • The change in practices and capacities of the various target groups resulting from their use of the goods and services offered by the project.
    • The benefits which have accrued to the target groups as a result of the changes stimulated by the project.

Not all of a project's information requirements can be met at this level, or purely by monitoring. It is therefore important to have evaluations to provide the information that a monitoring system cannot.

Institutional lethargy

The recent scandal of unspent anti-poverty funds at the Department of Welfare should have emphasised to all the importance of monitoring. It is notable that S&T designed a monitoring system for the Department of Welfare and the Independent Development Trust, which included data collection mechanisms, Key Performance Indicators and impact analysis. Unfortunately we were not asked to implement the system.

The Department of Welfare is not alone. We frequently find that monitoring strategies lag far behind the implementation schedule, with the result that they are put in place when projects are close to completion. Alternatively, the system in place was designed by people who do not understand the development dynamics facing the programme on the ground. Most importantly, one finds discrepancies between commitment and actual delivery, especially when it comes to resourcing such a strategy. People want monitoring: but they don't want to pay for it.

In many instances, monitoring only gets thought about when there are problems. These might be institutional arrangement problems with the implementing agents, or political tensions in a particular community regarding project targets set by national personnel, with the result that little social impact is recorded at project level. The problem with not having a monitoring budget item is that at the point when funds are needed to perform the task, they have to be harnessed from somewhere else to enable project officials to understand the problem, measure its prevalence and develop solutions. Mobilising funds takes time and effort - and many problems slip through because the effort of dealing with them is greater than the reward appears to be.

Setting up a separate monitoring unit

Some programmes do not have a unit or function dedicated to monitoring. In others, a small unit is established, often long after implementation begins. In some instances, the notion of setting up a monitoring unit and/or function is perceived by policy makers as a waste of time and resources. Despite the Mbeki government's emphasis on delivery - impossible to measure and judge without fully functioning M&E units - some officials persist in this view. Generally, when monitoring is seen as unimportant, programme bureaucrats adopt an ad hoc strategy to manage and monitor programme performance and progress. This unsystematic approach leads to uneven results.

Ownership of the system

One major challenge for monitoring programme performance is to ensure that the monitoring system is owned by ALL stakeholders. In practice, this means that the system is used by all those involved, at the different levels of project implementation, to ensure that the programme achieves its objectives. Monitoring must not be seen by project officers as a tool to identify their non-performance. As soon as this is the case, the system stops working for the benefit of those targeted (the poorest of the poor): instead, it becomes a weapon in a battle between management and staff. Ensuring that the system is owned by all, as a tool to help everybody manage the programme better, continues to be one of the challenges that project bureaucrats need to handle well.

Relying on a sophisticated computerised system

There is a tendency to assume that having a very sophisticated computer package means having the best monitoring system in place. Monitoring is not a computer package. It is unfortunate that there is a growing emphasis on buying "non-negotiable" and costly computer packages in the name of monitoring development impact. The fact remains, however glossy the package: if rubbish goes into a monitoring system, rubbish will come out. The quality of design at data-capture level, and the dedication of staff in accurately completing the data-capture sheets, are the key determinants of success.

Cumbersome reports

The result of relying on cumbersome computerised systems is that the reporting requirements become cumbersome, with very little policy meaning. Reports typically comprise page after page of graphs and/or tables - but with no textual analysis, no policy recommendations, no executive summary - and no chance that busy Ministers, DGs or others will even try to find their way through them.

Data integrity

Whatever the system, the issue of data integrity remains critical. This means that the flow of data from project level to the highest level needs to function efficiently and reliably. The critical factor is whether the data being analysed are accurate or not. The relationship between data integrity and 'ownership' by all participants is fundamental.

Embarrassed by your project?

It is important to understand that the purpose of monitoring is to help project implementers understand what is happening: in other words, whether the project is achieving its objectives - and if not, how to put in place an alternative strategy and "steer" the project in the desired direction. What happens in some instances is that policy makers get embarrassed when the project is not performing in accordance with set objectives. As a result, they do not want to hear about new challenges facing them. Being defensive when projects do not perform as expected is natural: but if the monitoring system is properly designed, it will include problem identification, analysis and remedy mechanisms.

Monitoring is fundamentally about transparency and flexibility. Monitoring systems have to give the good as well as the bad news. A good system will help solve the problem before it becomes widespread or capable of subverting the programme. Education about monitoring is clearly still needed, in government and civil society sectors.

