Logframes can be useful
In the last edition of the newsletter we discussed the basics of Project
Cycle Management, and explained that the central feature of this approach
is the logframe. In many instances logframes are constructed by officials
who feel they are only doing this work because the funder insists on one,
and so they grow to resent anything to do with logframes. Anything imposed
from outside, without being properly explained and integrated, will appear
burdensome. Logframes, because of their technical language, are easily hated.
In this article we outline the useful role a logframe can play in the design
of a monitoring and evaluation system for one’s project or programme.
Levels of monitoring
The primary purpose of an M&E system is to enable managers of the
various projects and implementing agencies to monitor progress and performance
(and in turn react to the information). At a higher level in the management
hierarchy, and over the longer term, the effect and impact of the programme
on beneficiaries can be evaluated. For both monitoring and evaluation, reporting
the results in accessible format to all role-players is a critical challenge,
as is communication with the general public and target groups. Central to
any M&E system is the participation and contribution of stakeholders
in the design.
Steps in designing a monitoring and evaluation framework include: assessing
the institutional arrangements and organisational structures in the programme
and information needs at each level; using the logframe matrix as a project
planning, monitoring and evaluation design tool; and identifying key indicators
with which to measure progress.
Having established the key organisational arrangements for the programme,
the next stage in the design of the monitoring and evaluation system itself
is the analysis of the objectives, purpose, results and activities of the
programme using the logframe.
The logframe identifies the overall objective and purpose of the programme.
Moreover, it explains how the programme will meet the objective and purpose
through the implementation of projects (usually at the results level). In addition,
each result/project has a unique set of activities that will ensure that
the results are delivered.
For the implementation of a programme there are three logframe levels,
and for each level (row) within the logframe there are different types
of indicators. Those for the activity and results level primarily measure
progress in programme implementation. Indicators at purpose and overall
objectives level measure effect and impact of the programme.
The relationship between the levels of the logframe and the type of indicator
and information contained in the indicator is defined below for a programme.
The selection of indicators is the next key step in the design of the monitoring
and evaluation system. It is important in the design of the M&E programme
that the indicators (the Objectively Verifiable Indicators or OVIs) identified
are appropriate to the monitoring of programme and project delivery. Experience
has shown that poorly specified OVIs are a major weakness of M&E systems.
A second weakness to guard against is the listing of too many OVIs. Project
managers will have to ask themselves whether they need the information specified
in OVIs to manage the programme and make decisions. If they do not need
the information for this purpose, they should leave the OVI out of the system
and gather that information on an ad hoc basis instead.
A critical step in the design of a monitoring and evaluation system lies
in ‘operationalising’ the indicators that are to be monitored. In other
words, translating policy themes into specifications about what data should
be collected, how it would be gathered, and how it would be analysed.
The success of a monitoring system lies in the availability of accurate,
timeous data. This is often achieved through the design of a simple system.
Some key performance indicators or OVIs will have to be excluded if the
system is to be cost-effective and manageable. There must be a deliberate
and purposeful decision to monitor performance regarding certain information
and not all. Key decisions revolve around what data needs to be collected,
what variables need to be assessed, how data needs to be analysed, how often,
for what purpose and to whom the data should be reported.
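The decisions listed above can be made concrete as a purely illustrative sketch: an operationalised indicator treated as a small record specifying what is collected, how, how often, against what target, and for whom. All names and fields below are hypothetical, invented for illustration, and not part of any system described in this article.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One operationalised OVI: what to collect, how, how often, for whom."""
    name: str               # what variable is being measured
    data_source: str        # how the data will be gathered
    frequency_months: int   # how often it is collected
    target: float           # the planned value for this point in the plan
    report_to: str          # who receives the analysed result

    def variance(self, actual: float) -> float:
        """Deviation of the achieved value from the plan, as a fraction of target."""
        return (actual - self.target) / self.target

# A hypothetical indicator: monitoring flags the deviation for managers to act on.
water = Indicator(
    name="households with access to clean water",
    data_source="quarterly field survey",
    frequency_months=3,
    target=400.0,
    report_to="programme manager",
)
print(water.variance(300.0))  # 25% below target: -0.25
```

The point of the sketch is the discipline it forces: each OVI must state its data source, collection frequency and audience up front, or it has no place in the system.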
Good development can't happen quickly
The non-spending of poverty funds by the Department of Welfare has rightfully become an embarrassing scandal for government. Typically, however, the search is on for whose head will roll. But the problem is deeper than that. The question is: does government understand development?
Take your time
‘Good development can’t happen quickly’. That sentence will be found in every development text book and is the experience of every development practitioner and evaluator. The reasons? Development projects have to be designed, budgeted, piloted, receive official support and then specific budget allocations. With government departments suffering from a lack of capacity, this can take years not months. Then, specific beneficiary communities have to be identified, projects negotiated with them and re-designed if necessary, and goals agreed to - such as the employment of women, participatory management, and so on. Local stakeholders - chiefs, NGOs, political parties, and many more - have to be consulted and brought on board. Add more months for this. Finally, tenders have to be issued - for technical consultants, social facilitators, programme implementing agents and so on. Add yet more months, and then more to resolve the squabbling that explodes when people who lose tenders try to use political influence to get decisions reversed.
The poverty auction
But then we face the crunch problem: the Department of Finance, not known for its developmental sympathies, treats the annual anti-poverty fund as if it were the national lottery. Departments have very little time to submit proposals for funds. The Department takes months to decide who gets how much - and then announces the ‘winners’ with great fanfare. What criteria are used - beyond economic sustainability, scarcely a hallmark of most versions of welfare - no-one beyond Trevor & Maria knows. The Finance Department is also a lot quieter about the fact that if monies are not spent - entirely - within the financial year they are clawed back. It is in this context that the Department of Welfare’s failure should be analysed.
We have had the privilege of evaluating both the Transitional National Development Trust (TNDT) and the first R50m of anti-poverty money spent by the Department of Welfare in the 1997/8 financial year. Both cases highlight the problem of funding development year-on-year instead of changing to funding on a developmental basis.
Corruption and venality
Corrupt officials do exist. Corrupt private sector companies tender for and win government contracts. NGOs are not blameless either: many ‘eat money’ while others simply lack the capacity to deliver. By the same token, Ministers love press headlines. When we reviewed the Department of Welfare’s anti-poverty spending, departmental officials complained about the Minister telling the press that money would be spent on this or that project by such and such a date - regardless of whether the department was in any shape to live up to these commitments. Money was ‘dumped’ on projects so that commitments could be met. This problem doesn’t just affect the Department of Welfare. Unsurprisingly, some projects are now receiving funds from a range of government departments for the same activity. All of these are real problems. But they compound - they do not cause - the systemic problems in development funding.
Timing is everything
The issue of timeframes operates on two different levels. The first is the length of time taken for the funding agency or department to become operational and respond to needs coming from the grass roots. After the TNDT was established in April 1996, much of the first year was spent developing funding criteria and systems to respond to the enormous number of applications for funding. The Department of Welfare experienced similar problems in their first disbursement as there was no developed policy with respect to poverty alleviation and social development. Both were widely criticised for taking too long - but what alternatives did they have?

Secondly, the most important issue is how long the agency has to spend its money. The TNDT, as an interim funding mechanism, could only allocate funds to organisations for a maximum of twelve months. This severely limited the ability of the TNDT to make a serious developmental impact on the organisations and the communities they served. The extremely tight timeframes forced onto the Department of Welfare by the Department of Finance also compromised development processes. A large number of the projects funded by the TNDT and Welfare needed serious capacity building if they were to become sustainable. However, the bulk of capacity building focused on financial management - important, but not the only aspect of sustainability. The sustainability of projects is an expressed goal of many funders. Both evaluations show that community participation, capacity building and financial viability are all needed to increase the sustainability of a project.
To achieve these goals needs time, commitment and appropriate skills on the part of the funder. Where Department of Welfare staff are incompetent or corrupt, their heads should roll. But if we want real development then we need a funding cycle that does not contradict development itself.
The status of monitoring
S&T partner Moagi Ntsime is involved in monitoring analysis for the
Consolidated Municipal Infrastructure Programme, the Community Based Public
Works Programme and others. In this article he takes a look at the status
- or lack of it - given to monitoring in development programmes.
Why the need for monitoring
Most government and development agencies spend a substantial percentage of their budget on development and poverty alleviation programmes among the target groups. As part of their advocacy strategy, policy-makers and politicians praise their strategies when making speeches about the manner in which they are committed to improving the lives of the ordinary citizens. What matters however is being able to accurately account for the extent to which budgets reached the target groups and made a positive impact. For this to happen, the need for a functioning monitoring strategy and system remains central in all respects.

Monitoring refers to the systematic observation and documentation of information on the implementation of a project, based on the project plan. It comprises regular and on-going collection, analysis and reporting on data relating to progress and expenditure. Data may be qualitative and/or quantitative, but must be designed to measure specific indicators.

Monitoring is distinct from evaluation of a project by external agents. Monitoring is regular and is an internal management function. Its output should allow key players at all levels – from project to programme – to assess their progress, identify problems, workshop solutions, and improve efficiency. An evaluation is normally fairly broad in scope, and provides a broad judgement on progress. It is rarely located at project level, and results rarely reach managers or PIAs in time to influence implementation.
Monitoring anti-poverty programmes
Monitoring a poverty alleviation programme usually requires a range of methods. These might include the use of accessible questionnaires to collect basic data, participative techniques, qualitative instruments, and so on. The system usually has to work on three levels.
Monitoring the performance of the project, which includes observing, measuring and recording
- The development of project resources in comparison with what was scheduled in the plan of operations/work plan and related personnel, input and budget plans, as well as the accounting system.
- How timely, competently and efficiently the project activities have been executed, and what milestones have been achieved compared to what was scheduled in the Plan of Operation.
- The outputs/results achieved in comparison with what is specified in the indicators established and agreed to.
Monitoring the framework, which includes observing, measuring and recording
- Changes in the assumptions in relation with specified indicators or minimum thresholds, which need to be monitored to ensure project success.
- Unforeseen side effects of project interventions with special emphasis on negative side effects.
Monitoring the impact of the project, by observing, measuring and recording
- The extent to which the various target groups are utilising the goods and services delivered by the project compared to what was estimated in the indicators.
- The change in practices and capacities of the various target groups, resulting from their use of the goods and services offered by the project.
- The benefits which have accrued to the target groups as a result of the changes stimulated by the project.
Not all of the information required within the project will be captured at this level, or purely by monitoring. It is therefore important to have evaluations to provide the information requirements not met within a monitoring system.
The recent scandal of non-spent anti-poverty funds by the Department of Welfare should have emphasised to all the importance of monitoring. It is notable that S&T designed a monitoring system for the Department of Welfare and Independent Development Trust, which included data collection mechanisms, Key Performance Indicators and impact analysis. Unfortunately we were not asked to implement the system.

The Department of Welfare is not alone. We frequently find that monitoring strategies are far behind the implementation schedule with the result that they are put in place when projects are close to completion. Alternatively, the system in place was designed by people who do not understand the development dynamics facing the programme on the ground. Most importantly, one finds discrepancies between commitment and the actual delivery of the system especially when it comes to resourcing such a strategy. People want monitoring: but they don’t want to pay for it.

In many instances, monitoring only gets thought about when there are problems. These might be institutional arrangement problems with the implementing agents, or political tensions in a particular community regarding project targets set by national personnel, with the result that there is little social impact recorded at project level. The problem with not having a monitoring budget item is that at the point when funds to perform the task are needed, they have to be harnessed from somewhere else to enable project officials to understand the problem, measure its prevalence and develop solutions. Mobilising funds takes time and effort - and many problems slip through because the effort of dealing with them is greater than the reward appears to be.
Setting up a separate monitoring unit
Some programmes do not have a unit or function dedicated to monitoring. In others, a small unit gets established, often long after implementation begins. In some instances, however, the notion of setting up a monitoring unit and/or function is perceived by policy makers as a waste of time and resources. Despite the emphasis of the Mbeki government on delivery - impossible to measure and judge without fully functioning M&E units - some officials persist with this view. Generally, when monitoring is seen as unimportant, programme bureaucrats adopt an ad hoc strategy to manage and monitor the programme performance and progress. This unsystematic approach leads to uneven results.
Ownership of the system
One major challenge for monitoring programme performance is to ensure that the monitoring system is owned by ALL stakeholders. In reality this means that the system is used by all those involved at different levels of project implementation to ensure that the programme achieves its objectives. Monitoring must not be seen by project officers as a tool to identify their non-performance. As soon as this is the case, the system stops working for the benefit of those targeted (the poorest of the poor): instead, it becomes a weapon in a battle between management and staff. The issue of the system being owned by all as a tool to assist everybody to manage the programme better continues to be one of the challenges that project bureaucrats need to handle well.
Relying on a sophisticated computerised system
There is a tendency for persons to assume that having a very sophisticated computer package means having the best monitoring system in place. Monitoring is not a computer package. It is unfortunate that there is a growing emphasis on buying "non-negotiable" and costly computer packages in the name of monitoring development impact. The fact remains, however glossy the package: if rubbish goes into a monitoring system, rubbish will come out. The quality of design at data-capture level, and the dedication of staff in accurately completing the data-capture sheets, are the key determinants of success.
The result of relying on sophisticated computerised systems is that the reporting requirements become cumbersome, with very little policy meaning. Reports typically comprise pages and pages of graphs and/or tables - but with no textual analysis, no policy recommendations, no executive summary - and no chance that busy Ministers, DGs or others will even try to find their way through them.
Whatever the system, the issue of data integrity remains critical. This means that the process of data mobility from project to the highest level needs to function efficiently and reliably. The critical factor is whether the data being analysed are accurate or not. The relationship between data integrity and ‘ownership’ by all participants is fundamental.
Embarrassed by your project?
It is important to understand that the purpose of monitoring is to help project implementers understand what is happening. In other words, is the project achieving its objectives - and if not, how can an alternative strategy be put in place to "steer" the project in the desired direction? What happens in some instances is that policy makers get embarrassed when the project is not performing in accordance with set objectives. As a result, they do not want to hear about new challenges facing them. Being defensive when projects do not perform as expected is natural: but if the monitoring system is properly designed, it will include problem identification, analysis and remedy mechanisms.
Monitoring is fundamentally about transparency and flexibility. Monitoring systems have to give the good as well as the bad news. A good system will help solve the problem before it becomes widespread or capable of subverting the programme. Education about monitoring is clearly still needed, in government and civil society sectors.
Learning and teaching in Africa
Our mission statement at S&T is to use our skills to put Africa first.
In recent weeks, we’ve had the privilege of being invited to work in Nigeria
and Kenya, offering us the chance to be true to our mission.
Training in public participation in Abuja, Nigeria
The change to democracy in Nigeria brought with it a constitutional review,
following similar undertakings in Kenya, Eritrea, Zimbabwe and of course
South Africa. David, who was the official evaluator of the public participation
process for the South African Constitutional Assembly, was invited to participate
in a civil society training workshop held in Abuja.
Under the auspices of the Centre for Democracy and Development, an NGO
that stretches across Nigeria and Ghana, civil society structures in Nigeria
were brought together to analyse their own strategy and to learn from our
experience in South Africa.
The South African team initially included Hassen Ebrahim, Deputy Director-General
of the Department of Justice and former CEO of the Constitutional Assembly,
but he had to withdraw. John Tsalamandris, who had managed the Bill of Rights
committee in South Africa, went with David.
A skewed process?
The President of Nigeria has appointed a constitutional review committee,
with a mandate to travel across the country and solicit inputs, and to report
back in 3 months. Bearing in mind that there are 120 million Nigerians –
with over 250 different ethnic groups and languages – clearly the process
is inadequate, when judged against what took place in South Africa.
The civil society structures heard about the South African experience,
our strengths and weaknesses, based on the successive evaluations that David
had managed. They were also trained in conflict management, negotiation
techniques and a host of other important issues by John.
The African experience
We forget that we had a unique opportunity in the mid-1990s, to draw up
a genuinely democratic and participative constitution. We had a government
that wanted one; donors that put money into the process; and a powerful
set of civil society structures ready to participate, and fight hard for
what they believed in.
In Nigeria – as elsewhere – the situation is different. No real mechanisms
exist for broad-based public participation, which is left entirely to CBOs
and NGOs. This is as true in Nigeria as it is in Kenya.
But if there is little up-front space for mobilisation, the experiences
of countries other than South Africa show that a critical mechanism for
legitimacy is a post-drafting referendum. The referendum in Zimbabwe is
the best-known example of this, and shows how a process that is skewed to
favour the ruling elite can be thrown out by the citizenry.
And it is this kind of mechanism that the civil society structures in
Nigeria are now calling for. A referendum opens space for debate and mobilisation,
and for resources to be made available by wary donors. And – critically
– it is a process that can confer or remove legitimacy, with important and
far-reaching implications for ordinary citizens.
After days of intensive work in the 41 degree heat of Abuja, we felt exhilarated
and exhausted. Civil society is alive and well and growing in Nigeria, and
the more we can learn from each other, the more a common experience can
be analysed and understood.
HIV/AIDS
Surveys that we have conducted over the last decade have found increasing numbers of South Africans whose lives are being directly touched by HIV/AIDS.
Ja - nee
Surveys typically throw up a profile of awareness coupled with denial. The typical response to questions about the virus is that HIV/AIDS poses a serious threat to South African society. However, when the questions focus closer to home, diminishing proportions of South Africans have traditionally thought that HIV/AIDS poses a threat to their community or to them personally. Very few people ever admitted to knowing anyone in their community who is HIV positive.
However, the days of denial are running out. HIV/AIDS has become a reality for many people in the country, and is tragically set to become so for a great many more.
Awareness of AIDS-related illness and death
A recent random sample survey that we conducted in 3 District Councils in KwaZulu-Natal for the Department of Public Works produced shocking results. The survey concentrated on predominantly rural residents and found that one out of every two respondents had heard of someone suffering from full-blown AIDS in their community.
A second question asked whether respondents had heard of anyone who had died of AIDS in their community. Many people try to hide the HIV+ status of those infected with the virus. In addition, deaths are often attributed to the presenting disease such as TB or pneumonia. In other words, there is considerable scope for denial about HIV/AIDS. In this context it was tragic that one in every two respondents - 51% - told us they knew of someone who had died of AIDS in their community.
Regional spread or awareness?
It would appear that KwaZulu-Natal is at the heart of the HIV/AIDS epidemic. A similar survey in the Eastern Cape found far lower proportions of residents who knew HIV/AIDS sufferers or victims. But is this because of the geographic spread of the HI virus, or because KwaZulu-Natal is at the heart of much HIV/AIDS work including awareness raising?
The integration of HIV/AIDS concerns into government development programmes is of vital importance. The epidemic has taken root in rural areas, with far higher proportions of rural residents having heard of AIDS sufferers in their community compared with their urban counterparts. It may be that people with full-blown AIDS are sent to older relatives in rural areas to take care of them. This needs further investigation: if true, it will place greater stress on the already stretched rural infrastructure and household resources.
It is vital that the impact of HIV/AIDS on rural communities is carefully mapped and models for future impact are developed.
With the government about to embark on a sustained rural development strategy, it is vital that we better understand the likely impact of HIV/AIDS on social structures, the economy and so on.
This is particularly true of rural areas, where the household structure is both different from urban areas and already overstretched by poverty. When the costs - financial, emotional and other - of home-based care for AIDS sufferers are added, then we have to wonder what measures can be taken to shore up the household against collapse.
From the sideline
Hansie and the boys have just won an historic test series in India, then lost the one-day series against the same opponents, then went on to lose the triangular series in Sharjah. Now the team is embroiled in allegations of match-fixing!
While Makhaya Ntini was an important part of the side in Sharjah, the team’s composition left one wondering how the development and transformation of the game was proceeding at the lower levels.
Over a year ago, the United Cricket Board launched its Transformation Charter - a broad-based plan to address the inequities of the past – and appointed a monitoring committee to oversee the entire process. While this was commendable, very little has been heard of the work of this committee or how the sporting code of cricket is coming to grips with transformation and development.
In September last year, Minister Balfour alluded to the introduction of performance contracts for each and every sporting code. These contracts were then to form the cornerstone of the process of transformation in sport. Again, very little has been heard of this since.
Since the 1995 Rugby World Cup, there have been repeated efforts and veiled threats to hold the sport administrators accountable for transformation of their respective codes. To give these efforts and threats any substance, one needs to have detailed information about the state of play in each of the codes. At what stage is development? What resources are needed and where? How are different individuals feeling and thinking about the development?
Regular monitoring and evaluation would provide this information. It is only then, with detailed and nuanced information at one’s fingertips, that one can more objectively view the transformation process in its entirety rather than solely reacting to the composition of our national cricket team.
Mzungu Lekgoa’s blueprint for this monitoring and evaluation includes:
- A baseline measure to establish the current state of affairs.
- A needs assessment to see what resources are needed where as well as to establish the Key Performance Indicators which are to be periodically tracked and against which the transformation can be assessed.
- A monitoring and evaluation system that regularly assesses the state of the transformation programme and reflects on the progress, blockages and challenges that transformation throws up.
We are in the midst of exciting and challenging times regarding the transformation and development of our society and sport is no exception. It is imperative, however, that this transformation and development is documented not only for the present but also for the future. And for a small fee, Mzungu can design a system for you…
Monitoring and evaluation: the Community Based Public Works Programme
Elma Scheepers, the Deputy Director for Monitoring and Evaluation in the National Department of Public Works, shares her views on the role and status of M&E in the Community Based Public Works Programme.
The Need for an Effective Monitoring and Evaluation System
The 1996 and 1997 evaluations of the Community Based Public Works Programme
by the Community Agency for Social Enquiry and the International Labour
Organisation identified problems with the verification of data. It was therefore
strongly recommended that a more effective monitoring and evaluation (M&E)
system be instituted. Since 1997 the Department of Public Works has successfully
developed an improved Programme Management System and an associated M&E system.
The objective of M&E is to produce high quality information to enable
improved decision-making. M&E consists of two related parts, namely
monitoring and evaluation. According to Ticehurst (1995) monitoring is the
collection and management of data, which relates to a predefined target value
for objectively verifiable indicators. With monitoring the primary concern
is related to the planned objective. The recording of what has been accomplished
is achieved by result-orientated monitoring.
The results enable an organisation to compare what is being achieved with
the outcome envisaged in planning. If progress is checked at intervals and
compared with the plan, the result is a route description. If there is a
significant difference between the results and planned objectives, the organisation
will need to take corrective actions. In this regard, it is important that
the reasons for the deviation are well understood.
Evaluation is the analysis and periodic use of data to assess programme
(and project) viability, performance, progress and impact (Ticehurst
1995). Evaluation implies a judgement and this is achieved by the analysis
of monitored indicators measured against set goals and objectives.
Why is M&E Needed?
M&E are required for the improvement of a product or service and for
accountability to stakeholders. A core system is needed to ensure that the
programme’s norms, standards and policy objectives are met. The advantage
of using an M&E system is that it is a warning system capable of saving
time and money.
Key elements for effective M&E include the use of external moderators
(evaluators) for objective perspectives: their results must be action orientated
so that decision-makers are appropriately informed. The principles of accountability
and transparency are important in intervention performance. The main purpose
of public sector reforms should be to make government delivery more efficient,
transparent and accountable. A programme with an explicit government intervention
such as the Community Based Public Works Programme must adhere to these
principles when assessing the impact of its intervention, in relation
to the goals of the programme.
A critical element of evaluation is that it not only measures results,
but also helps in the understanding of cause and effect relationships. It
plays a crucial role in reforming the public sector in terms of enabling
effective policy formulation, planning and implementation.
M&E in the Community Based Public Works Programme
The primary purpose of M&E in this programme is to provide a reflective
mechanism that enables the programme’s decision-makers to critically examine
policy, implementation strategies and progress. It is also important to continually
develop a culture of M&E within the organisation. The commitment of the programme’s
management has led to the integration of an M&E system into the programme,
and the attention paid to continually improving the system is further evidence
of that commitment.
The M&E system consists of a Programme Management System (a manual
on how to implement and monitor) with a computerised Integrated Monitoring
Management and Information System for monitoring purposes. The evaluation
of the programme is done on an annual basis by external evaluators. Periodic
ad hoc diagnostic studies are also done. More specific M&E information
of the programme will be provided in future publications of this journal.
Programme Definition Mission for Danida, Kenya
David Everatt has been appointed Team Leader for a Programme Definition Mission
in Kenya, working for the Royal Danish Embassy. The project - like our work
for DFID in Kenya last year - is being managed by South Consulting.
The Danish Embassy fielded a Formulation Mission last year, in order to
identify strategic priorities for their next funding phase (stretching to
2002). David has been brought in – with three powerful Kenyan colleagues
– to provide the Embassy with immediately implementable results.
David will take responsibility for defining the scale and functions of
a Programme Advisory Support Unit for the Embassy, which is one of the
largest donors in the country and a strong supporter of the relatively young
Kenyan NGO community, but is severely understaffed. David is also defining key
spending areas in the civic education and/or constitutional review process.
In 1999, David helped define a ‘basket fund’ to which all major donors could
contribute and from which the five large NGO/CBO consortia could receive
funds. Since then, the constitutional review has been taken over by parliament,
civil society has been excluded, and civic education is now a more appropriate
description of activities than constitutional education.
Wachira Maina, a leading Kenyan consultant, has joined the team, to look
at areas of governance and institutional strengthening. Wambui Kiai will
be looking at the prospects for rural media. Karuti Kanyinga, a leading
academic commentator on civil society in Kenya, will be tackling the area
of NGO and CBO funding.
The whole team is trying to nudge the Embassy towards the clustering approach
commonly used in South Africa – where multiple assets are provided to key
development nodes – but taking the model further to include a local vernacular
newsletter that can carry project news as well as HIV/AIDS messages, gender
messages, and so on. We will also be trying to find ways of linking the
heavily Nairobi-centric NGO community with the rural development projects
that Danida supports, as well as their surrounding communities.
If we can manage this, then Danida will be mainstreaming human rights and
good governance issues in a very concrete way. Currently, ‘mainstreaming’
is widely spoken about but rarely extends beyond some training seminars
for extension officers and advisors. We hope to make it real: via local
newsletters, which in turn require para-journalists, production staff and
so on; via capacity building for the community committees that run projects;
and by capacity building for local government officials.
After an initial, frantic week in Nairobi, getting the project in shape, David
will go back in early May to undertake fieldwork as well as work closely with
the local consultants to finalise their reports.
Access, selection and admission to Higher Education: Maximising the use of the school-leaving examination
Matthew Smith, in conjunction with Nan Yeld and Peter Dawes at the University
of Cape Town, recently published an article in the South African Journal of
Higher Education (vol. 13 no. 3, 97-104) based on their ongoing research to
find ways to increase equitable access to institutions of higher education
in South Africa.
The article discusses a particular component of this research which aims
to find a reliable means of selection to Higher Education (HE) that (i)
maximises the use of the school-leaving examination rather than undermines
it, (ii) effectively widens access for historically disadvantaged students,
and (iii) is cost-effective and logistically manageable.
There are many reasons for the study. Essentially, they can be grouped
into three categories: the need to increase the participation rates of black
South Africans in HE (which includes technikons as well as universities);
changes in the school-leaving examination system; and the fact that, for
the majority of students in secondary schools, conditions remain much as
they were in the past (that is to say, schooling is still very unequal,
which means that black pupils in particular study under very unfavourable
conditions and are thus at a disadvantage when it comes to selection for HE).
The Place-on-Examination Indicator (PoE)
It is the aim of this study to demonstrate how the school leaving examination
can yield a useful indicator that has until now not been used by HE institutions
in the selection of students. The procedure that is proposed uses the aggregate
school-leaving examination score obtained by each individual (the raw total
of all the marks for all her/his subjects) at a particular school to derive
a rank for that school, and assigns an indicator (expressed as a percentile)
to each individual which reveals her/his place on that rank (i.e. in that
class) for that exam. We have called this the place-on-exam (PoE) indicator.
To calculate the place-on-exam, the raw school-leaving examination aggregate
of every student in every examination class, across the various examination
authorities, is compiled for the examination year. Once the data is compiled,
the actual calculation of the PoE is relatively simple. It involves sorting
the matriculants by Exam Centre number, and by total marks within each exam
centre, and then allocating a position number to each candidate. From there
it is a simple matter to calculate the percentile representing that position,
based on the total number of candidates in the centre.
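The procedure described above can be sketched in a few lines of code. This is a minimal illustration only: the field layout (exam centre, student identifier, aggregate mark) and the function name are assumptions for the example, not the actual dataset or software used in the study, and it ignores details such as tied marks.

```python
from collections import defaultdict

def place_on_exam(candidates):
    """Assign a place-on-exam (PoE) percentile to each candidate.

    `candidates` is a list of (exam_centre, student_id, aggregate) tuples;
    this layout is an illustrative assumption. Returns a dict mapping
    student_id to a PoE percentile (higher = better placed in the class).
    """
    # Group candidates by exam centre: each student is ranked only
    # against those who wrote at the same centre.
    centres = defaultdict(list)
    for centre, student, aggregate in candidates:
        centres[centre].append((student, aggregate))

    poe = {}
    for group in centres.values():
        # Sort by aggregate mark, best first, then allocate positions.
        group.sort(key=lambda s: s[1], reverse=True)
        n = len(group)
        for position, (student, _) in enumerate(group, start=1):
            # Percentile of this position within the centre's class.
            poe[student] = round(100 * (n - position + 1) / n)
    return poe
```

With two candidates at one centre, the higher aggregate receives a PoE of 100 and the lower a PoE of 50, which captures the key property of the indicator: performance is expressed relative to classmates, not as an absolute mark.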
Such an indicator clearly has a number of advantages. First, it protects
individuals from being victims of circumstance, in that their performance
is assessed only in comparison with those who have had similar educational
opportunities. Second, it functions as an indicator of relative merit which
is independent of vagaries in the school-leaving examination system. A third
major advantage of the PoE indicator is the ease of its use by institutions.
The results of this study suggest the following:
- Calculating a place-on-exam indicator for each student is worthwhile,
as it allows the admissions officer to make better choices about students,
especially borderline candidates. To facilitate this process, the PoE should
be calculated centrally, before the results are distributed nationally.
- Students who are placed in the top decile of their matriculation class
are likely to succeed at university.
- The data collected in this study should continue to be updated to
establish the relationship between place-on-exam and graduation.
- Further investigation needs to be undertaken to establish how place-on-exam
can be used in combination with school-leaving examination results.