Key Evaluation Observations and Challenges


What can we keep in mind to improve evaluations?

Evaluating RDPs in the 2014-2020 programming period has changed considerably. To fulfil the objectives laid out in the legislation and deliver robust assessments of RDP results in 2017, of impacts in 2019, and in the ex post evaluation, Member States need to take into account the specificities of the new programming period and apply sound methodologies. This can seem like a daunting task for those less familiar with evaluation; however, keeping a few basic observations in mind and preparing for a few common challenges will go a long way towards a smoother evaluation process.

Below are the key suggestions and challenges unique to this programming period that should be kept in mind in order to achieve better evaluations.

  • Complementing the evaluation elements for each RDP: Flexibility in programming has several implications for evaluation. Measures are no longer attributed to one specific ‘axis’ as in the past, but can now be programmed under different Union priorities/focus areas and programme-specific objectives. This flexibility, however, requires that the monitoring and evaluation system be adapted to each RDP. To capture the full effects of a given RDP, the common system must be complemented with programme-specific elements (evaluation questions and indicators).


  • Building up the evidence basis for robust evaluations: Member States need to decide which data is required in order to capture real results and future impacts of the programmes. This data must include baseline values of indicators and should ideally be in line with selected methods. Data needed to answer the evaluation questions should be identified early on and specified in tender specifications for evaluators. Existing data sources need to be identified and assessed for their suitability in each RDP evaluation.


  • Assessing the net effects: Only net values of result and impact indicators show the real contribution of RDPs to the changes observed in programme areas and targeted rural sectors. This calls for the application of advanced evaluation methods. In 2017, result indicators should have been calculated as gross and/or net values. In 2019 and in the ex post evaluation, the net values of all result and impact indicators should also be provided. It is therefore important to collect sufficient data on both beneficiaries and non-beneficiaries in the monitoring system. Only under these conditions will it be possible to compare control groups and determine the values of the result indicators needed to answer the evaluation questions related to focus areas and other aspects.
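To make the gross-versus-net distinction concrete, the sketch below uses purely hypothetical figures (not drawn from any actual RDP) to compare the change in an outcome indicator for supported holdings against a control group of comparable non-beneficiaries, a simple difference-in-differences style estimate of the net effect:

```python
# Illustrative sketch only: gross vs. net change in a result indicator
# (e.g. economic performance per holding). All figures are hypothetical.

beneficiaries = {"before": 100.0, "after": 130.0}   # supported holdings
control_group = {"before": 100.0, "after": 110.0}   # comparable non-beneficiaries

# Gross change: the change observed among beneficiaries alone
gross_change = beneficiaries["after"] - beneficiaries["before"]

# Counterfactual change: what would likely have happened without support,
# proxied here by the change observed in the control group
counterfactual_change = control_group["after"] - control_group["before"]

# Net effect: the part of the change attributable to the RDP
net_effect = gross_change - counterfactual_change

print(f"Gross change: {gross_change}, net effect: {net_effect}")
```

The point of the sketch is simply that the gross value alone overstates the programme's contribution; without data on non-beneficiaries, the counterfactual term cannot be estimated at all.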


  • Assessment of secondary contributions: It is important to demonstrate the full extent of what rural development policy achieves. The quantification and assessment of indicators is therefore based on both primary and secondary contributions of completed operations. Secondary contributions are additional contributions of operations to focus areas other than the one to which they are primarily attributed. The legal framework requires the intended secondary contributions to be flagged when the programme is designed. The validity of this flagging may be revisited during the preparation of the evaluation and corrected if necessary. Sampling allows the additional contributions of operations to focus areas to be estimated.
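The sampling step mentioned above can be sketched as follows, again with hypothetical numbers and hypothetical focus-area labels: a sample of completed operations primarily attributed to one focus area is assessed for additional contributions to another, and the sample share is extrapolated to the full population of operations.

```python
# Illustrative sketch only: extrapolating secondary contributions
# from an assessed sample. Population size, sample results and the
# focus-area labels ("FA 2A", "FA 5D") are hypothetical.

population_size = 500  # completed operations primarily under "FA 2A"

# Assessed sample: True = the operation also contributes to "FA 5D"
sample = [True, False, True, False, False,
          True, False, False, True, False]

# Share of sampled operations with a secondary contribution
share_with_secondary = sum(sample) / len(sample)

# Extrapolated number of operations contributing secondarily to "FA 5D"
estimated_operations = share_with_secondary * population_size

print(f"Estimated operations with secondary contribution: {estimated_operations}")
```

In practice a stratified sample and confidence intervals would be used, but the basic logic of estimating from a sample rather than assessing every operation is the same.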


  • Reporting on evaluation: Evaluation reporting is done through the AIR SFC template, with an entry for each evaluation question. The template requires not only a clear statement, but also the values from which that statement is derived. The AIR SFC template also allows space for programme-specific evaluation questions.


  • Quantification of indicators in case of low or no programme uptake: Due to the late start of RDPs, some Member States may be faced with too few completed operations to assess result and impact indicators. If this is the case, common and programme-specific result and impact indicators should still be calculated for those RDP measures and focus areas showing sufficient uptake. For measures with low uptake, it is necessary to take into account any information available on potential beneficiaries and to justify why result and impact indicators could not be calculated as required. In the case of no uptake, methods based on the theory of change or qualitative assessments can be used to obtain evidence on potential RDP achievements. Previous evaluations and studies may also provide useful sources of information.


  • Proportionality in the assessment of programme results and impacts: Regardless of the size of the RDP, its effects have to be assessed.


Find more information on what to keep in mind when conducting RDP evaluations in the Helpdesk’s eLibrary.