What is evaluation for? The missing links in the evolution of humanitarian actors.
Hugues Maury

Humanitarian actors have gradually come to accept the idea of having their work evaluated, with the aim of learning and improving practices. Unfortunately, we do not always check whether this aim has been achieved. What “missing links” need to be added to current practice so that the evaluation process delivers the desired results?

“Everything has been written, but not everything has been read.” (Prof. R. Perelman)


There is a general consensus today within the humanitarian sector that evaluations are important opportunities to learn and move forward, rather than to appraise and sanction. In little over a decade, there has been a significant change in the practices of humanitarian operators and donors, who now almost systematically include an evaluation during or at the end of a programme. Progress has, therefore, been made.

The ALNAP network has played an important role in bringing about this change by promoting the concept of evaluation, providing specialist knowledge and training, and assessing the quality of evaluation reports through meta-evaluation. Donors (such as DG ECHO), institutions like UNHCR and World Vision (which pioneered the After Action Review) and various research and evaluation teams like Groupe URD have also played a part in developing new methods and approaches to evaluation which aim to meet needs on the ground more effectively. Groupe URD conducted the first real-time evaluation in 1998 following Hurricane Mitch, ran an ‘Observatory of practices’ in Afghanistan from 2002 to 2008 and is currently setting up another in Chad. However, even though the number of evaluations carried out and the range of approaches adopted have increased, the quality of these evaluations, according to ALNAP, still often leaves a lot to be desired (see the meta-evaluations regularly published on the ALNAP website, www.alnap.org). Even more worrying is that evaluation has not changed practices as much as those who promoted its systematic use initially hoped: “evaluation and the identification of lessons has not led to the system-wide improvements in performance anticipated in 1997…” [1].

One may therefore ask how effective evaluation is as a tool. Do evaluations really help organisations to learn and to change? There is a risk that evaluations simply become a ritual: a regulatory requirement imposed by donors or decision-makers of various kinds in the interests of transparency and political correctness, but without any real impact on humanitarian practice. What changes need to take place to make evaluations genuine tools for change? What are the missing links that would explain why evaluations have had such a limited impact on the services delivered to beneficiaries?

[1] SANDISON, Peta; ROBERT, Pierre; Performance Assessment Resource Centre; Valid International, Evaluation of the Department for International Development’s Support to the Active Learning Network for Accountability and Performance, DFID, December 2004, 53 p. Available at http://www.dfid.gov.uk/aboutdfid/performance/files/aclearnnet.pdf