
LEADER Monitoring and Evaluation

7. Tools and Methods

How much do LAGs need to know about evaluation tools and methods?

There are two main reasons why LAG members and staff need to know about evaluation tools and methods: their involvement in self-evaluation activities, and their role in the management of evaluations. These require different degrees of knowledge and understanding.

In evaluating LEADER, with its inherently participative methods and its strong socio-economic dimension, a participative evaluation approach is strongly recommended; this applies whether an external or self-evaluation approach is employed. The Rural Development Regulation (RDR) 2014-2020 envisages stronger LAG involvement in evaluation activities. This is important in strengthening ownership of the process and outcomes, and can be beneficial in supporting institutional learning. This in turn contributes to developing evidence-based policies and social accountability, enhancing understanding of the territory, the LDS and its effects across the population. This reflexivity is an essential component of the development of a mature LAG, hence the assertion here that evaluation is the eighth feature of LEADER.

Active participation in the evaluation process also strengthens its relevance along with the understanding and ownership of the outcomes. This in turn can strengthen the trust within the partnership and between the LAGs and MA. Participative approaches are also particularly relevant to the process elements of LEADER and its methodology e.g. in assessing aspects of its added value by comparison with other approaches.

It must be noted, however, that great care is required to avoid an overly strong focus on the qualitative or methodological aspects of LEADER. This tendency has been prominent in the past, but given the priority of demonstrating the added value of the LEADER approach, the use of mixed methods is likely to offer better and more robust results.

The following sections provide a brief overview of the main evaluation tools and methods employed, with links to practical examples and to sources of guidance and advice on their use.

The examples highlighted can all be employed in external evaluations, in self-evaluation approaches, or in a mix of the two. It should be noted, however, that the design and use of even the most basic tools and methods is a skilled and specialist field; a little knowledge is a dangerous thing, particularly in self-evaluation.

What is self-evaluation, and how does it differ from other forms of evaluation?

The main difference lies in who carries out the work: in self-evaluation, an organisation uses its own expertise to evaluate itself; in external evaluation, an outside evaluator is brought in to carry out the evaluation for the organisation.

Who carries out the evaluation may partly depend on the resources available, but in LEADER, considerations such as whether the LAG also values building its own analytical skills and internal capacity for reflection are important.

Self-evaluation can be easier to integrate into the work of the LAG, linking to internal monitoring and reporting systems and tapping into internal knowledge. There may be greater ownership of the outcomes, with a greater likelihood of change. Self-evaluation is not fundamentally different from any other form of evaluation, however, and it is not an easier option. The same principles and requirements apply, e.g. in terms of the objectivity and rigour required, the need for evidence, data requirements, the level of participation, and the planning and fit within the RDP's evaluation framework.

The important thing to note here is that in both cases specific expertise is needed. The two approaches can often be combined to good effect; key considerations are the costs and the amount of time required and available in the LAG, the objectives and type of evaluation required, and the skills needed.

Qualitative, quantitative and mixed or triangulated approaches

The effects of LEADER, its method and the actions supported are both quantitative and qualitative. Quantitative effects are changes which can be measured in straightforward numeric terms using quantifiable indicators; these are often derived from reporting or monitoring data, project or business records, or through surveys or sampling.

Qualitative effects are more subject to judgement, either in their reporting or their evaluation. They reflect a change of state, be that in condition, behaviour, satisfaction or performance; as such, they require a judgement to be made on the extent of change. This involves the use of more sophisticated or specific tools, often involving scaled responses measuring change against a baseline position or condition, e.g. a Likert scale.
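To make this concrete, a minimal sketch of how scaled responses can be compared against a baseline; the question, respondents and scores are invented for illustration and not drawn from any real LAG survey:

```python
# Hypothetical 5-point Likert responses (1 = very poor ... 5 = very good)
# from the same respondents before and after an intervention.
baseline = [2, 3, 2, 3, 2]   # scores at the start of the programme
follow_up = [4, 4, 3, 5, 3]  # scores from the same respondents later

def mean(scores):
    return sum(scores) / len(scores)

# The qualitative effect is reported as the shift in the average rating
# against the baseline position.
change = mean(follow_up) - mean(baseline)
print(f"Average shift on the 5-point scale: {change:+.1f}")  # +1.4
```

In practice the questions, scale anchors and respondent selection all need careful design, which is where specialist input matters.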

In LEADER evaluations these are frequently used in combination to capture the different aspects of the effects of the approach and its objectives, including quantitative, qualitative, procedural and relational issues. The different tools thereby enable a process of triangulation: by comparing the different perspectives and results, a higher degree of resolution and more reliable conclusions can be achieved.

Conventional tools

Desk research: This is the starting point for many evaluations and is used to analyse the basic contextual and performance data; this would normally include analysis of relevant contextual trends, the policy background etc. It involves the use of monitoring and reporting data and standard data sources, e.g. on employment or business performance. Comparative analysis of performance against benchmarks or control examples is also possible. Desk research is often used to inform the design of the consultative elements of an evaluation.

Interviews: These are normally conducted by telephone or face to face with key informants, e.g. LAG members, community representatives, statutory organisations, NGOs, MAs etc. The questionnaires used can be adjusted in depth as required and tailored to the consultee; they can cover both qualitative and quantitative elements. Interviews are normally the most in-depth element of the consultative approach.

Surveys: Surveys can be conducted using a variety of media. They normally involve a questionnaire, which can be distributed by post, by email, online, by phone or face to face, and be self-completed or administered by a surveyor. Surveys may or may not be statistically representative; sample sizes for robust statistical reliability are unlikely to be achieved in a single LAG evaluation due to the relatively small number of beneficiaries. They can be used for both qualitative and quantitative purposes, but are generally less reliable than analysis of monitoring and reporting data or census approaches. Costs vary with the size of the sample, the length and complexity of the questionnaire, and whether it is self-completed, postal or online; qualitative approaches are more expensive. Online tools such as SurveyMonkey enable direct analysis of the findings but require careful design.
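Simple sample-size arithmetic illustrates why statistical robustness is rarely achievable at LAG level. The sketch below uses Cochran's formula with a finite-population correction, a common survey-planning rule of thumb rather than anything prescribed by LEADER guidance, and the population figures are invented:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's sample-size formula with finite-population correction
    (95% confidence, +/-5% margin of error by default)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # correct for small populations
    return math.ceil(n)

# A single LAG scheme might have only ~80 beneficiaries: 67 of the 80
# would need to respond, i.e. close to a census.
print(sample_size(80))     # 67
print(sample_size(10000))  # 370 suffices for a much larger population
```

This is why LAG-level surveys are usually treated as indicative, with census approaches or monitoring data preferred where reliability matters.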

Census: A census generally involves the use of a questionnaire with the entire population affected, e.g. all beneficiaries under a support scheme. The reliability of census results is high; costs will vary with the size of the population and the length and complexity of the questionnaire. Qualitative approaches are more expensive.

Focus groups: Focus groups are generally used to address a specific topic or question, or a specific section of the population. They should involve a small number (5-10) of individuals, balanced in age, gender, location etc., exploring an issue in depth with the support of impartial facilitation. Commonly this would be used to explore a specific evaluation theme or topic area, or to check or validate initial findings. Costs vary with location, duration and the number of facilitators required; findings may be presented as a specific report, an annex, or incorporated in the main report.

Case studies: Case studies vary enormously in the approaches adopted and in their depth and scale. They can be external examples used for comparison or illustration, or examples from within the subject of the evaluation highlighting specific issues or achievements. They may involve both desk research and consultative or primary research elements, and may be both qualitative and quantitative. Findings may be presented as a specific report, an annex, or incorporated in the main report. Their costs are affected by a wide range of factors such as nature, scale, location etc.

More technical and sophisticated approaches employed

The Working Paper [PDF] on Capturing impacts of Leader and of measures to improve Quality of Life in rural areas of the Evaluation Helpdesk1, developed by an expert working group, is targeted primarily at practitioners as a core resource of practical methodological approaches and tools for capturing the impact of LEADER and of the 2007-2013 RDPs' Axis 3 measures to improve the quality of life in rural areas. These were based on state-of-the-art methodologies and current practices in Member States. Four impact categories were identified together with a set of evaluation questions and indicators: socio-cultural, environment, rural economy and governance. The tools and methods suggested are designed to be sufficiently flexible to respond to the specificities of the programmes and programme areas where they will be applied. Many of the specific tools described below are included in this working paper, which also provides more technical descriptions of the conventional tools listed above.

Self-assessment: This involves the assessment of the two governance features, decentralised management and financing and the local partnership, which the working paper argues can justifiably be said to constitute the major building blocks of a sound LEADER evaluation; the other six LEADER features are seen as principally consequences of governance processes. LAGs should place self-assessment and self-reflection at the centre of their approach, and this can be built into an ongoing formative evaluation process, organised as a continuous cycle of events periodically involving different actors at different times.

Most Significant Change (MSC) monitoring: This assessment method is based on a narrative approach and operates without indicators; it can be integrated into participative and ongoing self-evaluation or monitoring processes. Carried out through focus groups, it produces narratives or stories about important or significant changes, giving a rich picture of the impact of development work and providing the basis for dialogue over the key objectives and values of development programmes. MSC complements other methods, but it comes into its own where outcomes are unexpected and meanings are disputed.

The Potential and Bottleneck Analysis (PBA) is based on the assumption that local and regional development efforts can be improved if qualitative and quantitative aspects are considered as interlinked and contributing to a comprehensive picture of the whole. The assessment focuses on the respective potentials and bottlenecks affecting local development in terms of eight key aspects. These aspects are assessed with approximately 90 specific questions during a workshop involving at least 30 participants using rating scales. After answering the 90 questions, the results can be visualized as cobweb profiles or bar graphs. Changes are expressed in terms of differentials from the baseline situation.
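The scoring step behind the cobweb profiles can be sketched in a few lines. The aspect names and ratings below are invented for illustration; the real PBA works with around 90 questions across eight aspects, rated in a workshop:

```python
# Hypothetical workshop ratings (averaged per aspect, 1-5 scale):
# a baseline assessment and a later re-assessment of the same territory.
baseline = {"economy": 2.1, "governance": 3.0, "environment": 3.4}
current  = {"economy": 2.8, "governance": 3.3, "environment": 3.2}

# Change is expressed as the differential from the baseline per aspect,
# which can then be plotted as a cobweb profile or bar graph.
differential = {aspect: round(current[aspect] - baseline[aspect], 1)
                for aspect in baseline}
print(differential)  # {'economy': 0.7, 'governance': 0.3, 'environment': -0.2}
```

A positive differential indicates a potential strengthening; a negative one flags a bottleneck worsening since the baseline.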

Plugging the leaks: The New Economics Foundation developed a simple-to-use approach to local economic development evaluation which uses the analogy of a leaky bucket to explain economic flows in a local economy. When money is spent in a local economy some flows out as people buy goods and services elsewhere. The more money that stays and re-circulates, the greater the retention of benefits will be in the local economy and the greater will be the likelihood that the re-circulating money will create more jobs. Economists tend to express this type of economic effect using the term ‘multipliers’ but, rather than constructing an elaborate economic model, a pared-down version using the same underlying principles can help throw light on changes in the local economy arising from project interventions. In order to evaluate impacts and outcomes, it should be used in a ‘before and after’ situation.
The great thing about this approach is that it can be used by the community itself to build up a picture of the flows in and around a local economy. It is a tool for both self-assessment and external evaluation.
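The underlying multiplier arithmetic can be sketched in a few lines, in the spirit of the New Economics Foundation's LM3 (Local Multiplier 3) measure, which tracks three rounds of spending. All figures here are hypothetical:

```python
# Round 1: money entering the local economy (e.g. a project grant).
initial = 100_000

# Shares of each round re-spent locally; the rest "leaks" out of the
# bucket as goods and services are bought elsewhere. Invented values.
local_share_round2 = 0.60
local_share_round3 = 0.40

round2 = initial * local_share_round2   # local re-spending, round 2
round3 = round2 * local_share_round3    # local re-spending, round 3

# LM3-style multiplier: total local spending over three rounds,
# divided by the initial income. Higher = more benefit retained locally.
lm3 = (initial + round2 + round3) / initial
print(f"Local multiplier: {lm3:.2f}")  # 1.84
```

Comparing such a multiplier before and after an intervention gives the "before and after" picture the approach calls for, without building a full economic model.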

Social Network Analysis helps to assess the density, quality and robustness of communication structures between partners in formal or informal networks and represents these results visually in a network diagram. The approach provides insights on bonding capital in a stakeholder network, on structural characteristics such as centrality or peripherality of specific actors, or on emerging sub-networks which are only loosely linked to other parts of the network, as well as on specific roles of actors within the network.
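One of the simplest SNA measures, network density, can be illustrated as follows; the partner names and links are invented, and a real analysis would also examine centrality, peripherality and sub-networks:

```python
# Hypothetical partnership network: nodes are partners, links are
# undirected communication ties observed between them.
partners = ["LAG", "NGO", "MA", "Business forum"]
links = {("LAG", "NGO"), ("LAG", "MA"), ("LAG", "Business forum"),
         ("NGO", "Business forum")}

# Density = observed ties / possible ties in an undirected network.
n = len(partners)
possible_links = n * (n - 1) / 2
density = len(links) / possible_links
print(f"Network density: {density:.2f}")  # 0.67 (1.0 = fully connected)
```

Here the MA is linked only through the LAG, the kind of structural observation (a peripheral actor) that SNA makes visible in the network diagram.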

Social accounting is the process of collecting information about the activities an organisation carries out which affect its stakeholders. These activities may be intended ‘outputs’ or just the day-to-day internal operations. Organisations do not exist in a vacuum and the impact they have on their environment can be measured according to three dimensions: social, environmental and financial (hence the term “triple bottom-line accounting”). Financial reporting has been in use for hundreds of years and can be used to show both what has happened and as a planning tool. Social accounting enables this process to be carried out for social and environmental outputs.

Measuring the improvement in rural community capacity2: The Scottish Managing Authority included this additional indicator for LEADER in its 2007-2013 RDP. In order to measure the indicator, its ongoing evaluators were asked to develop and pilot a measurement tool, developed from the suite of indicators and impact categories in the working paper above. The central principle was to identify a set of evaluation questions and align these with pieces of evidence which LAGs may already collect, or could readily collect, to demonstrate their achievement. This included elements such as attendance at LAG decision-making meetings, LAG representation on other bodies, participation of match funders by sector, the number of projects successfully completing project or financial management training, and the number of spin-off projects, organisations or networks initiated as a consequence of LEADER.

Social Return on Investment3 (SROI): The use of this approach in the assessment of the social and process-related aspects of interventions is becoming more widespread, and it is increasingly being applied in LEADER. SROI recognises that there are many things we value that cannot be easily captured in traditional economic terms. As conventional cost-benefit analysis approaches do not consider anything beyond simple costs and prices, alternative tools to measure social and environmental impacts were needed and developed. SROI is an analytical tool for measuring and accounting for a much broader concept of value, taking into account social, economic and environmental factors.
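The headline SROI calculation divides the discounted, monetised value of outcomes by the investment made. A hedged sketch with invented figures (real SROI studies also involve stakeholder mapping, valuation proxies and adjustments for deadweight and attribution):

```python
# Hypothetical intervention: one up-front investment, outcomes valued
# (via monetised proxies) over three years, discounted to present value.
investment = 50_000
monetised_outcomes = [30_000, 30_000, 30_000]  # value per year, years 1-3
discount_rate = 0.035                           # illustrative rate

present_value = sum(value / (1 + discount_rate) ** (year + 1)
                    for year, value in enumerate(monetised_outcomes))

# The SROI ratio: units of social value created per unit invested.
sroi_ratio = present_value / investment
print(f"SROI ratio: {sroi_ratio:.2f} : 1")
```

A ratio above 1 : 1 indicates the intervention is estimated to create more (broadly defined) value than it costs; the credibility of the result rests entirely on the valuation assumptions.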

RUDI, the Rural Development Impacts4 project, was supported under the 7th Framework Programme "European Knowledge Based Bio-Economy". The research aimed to enhance understanding of the social, institutional and capacity-building effects of the different components of rural development policies at both national and regional levels within the 2007-2013 programming period, and to identify factors that lead to failure, as well as to learn from best practice examples. The study includes a number of case studies of relevance to LEADER, with references to specific evaluation methods which could have wider relevance.

1 Helpdesk of the European Evaluation Network for Rural Development, The Working Paper on Capturing impacts of Leader and of measures to improve Quality of Life in rural areas (2010)
2 ENRD website, NSU Training programme [PDF ]
3 See for an explanation of the technique

The text on LEADER evaluation in this toolkit is a practical guide aimed at helping all the actors involved in LEADER to make LEADER evaluation more effective. It does not serve as guidance for formal RDP evaluation related to LEADER; for the latter, please visit the Evaluation Helpdesk resource page.

  • Factsheet on Social Return on Investment tool [PDF]
  • LEADER related evaluation on local level
    • Final Evaluation of interventions in the forestry sector in Cumbria funded through the Rural Development Programme for England (RDPE) via the Cumbria Fells and Dales and the Solway, Border and Eden Leader Groups [PDF ]
    • Bolsover North East Derbyshire LEADER, Programme Evaluation, 2014 [PDF ]
Last update: 19/06/2014