Coverage – the proportion of all people needing or eligible to receive a service who actually receive that service – is an important measure of a programme’s ability to meet need. Timely, appropriate and high-quality monitoring of coverage is therefore essential in planning, implementing and tracking the progress of programmes such as those for maternal, newborn and child health and nutrition1. However, large gaps still exist in the coverage data and assessment of these programmes1 2 3. For example, only 11 of the 75 countries that account for 95% of maternal and child deaths had data on all 8 coverage indicators recommended for monitoring the progress of key interventions aimed at improving women’s and children’s health4. A similar scenario exists for nutrition programmes. Coverage data for the 12 key nutrition-specific interventions crucial for reducing undernutrition3 5 6 7 are scarce; only 3 of the 12 interventions have readily available, nationally comparable data in a majority of countries, while the rest have either no data at all, non-comparable data or only sub-national data4. Coverage data on interventions for the management of acute malnutrition (both severe and moderate acute malnutrition – SAM and MAM) fall into the latter category, with very limited data available.

World Food Programme and coverage

The World Food Programme’s (WFP) Strategic Results Framework (SRF) outlines the organisation’s approach to “plan, measure and monitor, review, report and learn from results”8. As such, the SRF specifies WFP project outcomes and outputs and the indicators by which to measure their achievement. The current SRF for the period 2014 to 2017 reflects the organisation’s work to demonstrate the results of its shift from a food aid paradigm to food assistance, with projects that “prepare and respond to shocks, restore and rebuild lives and livelihoods, enhance nutrition coverage, reduce vulnerability and build lasting resilience”8. When comparing the current SRF to its predecessor, the most significant difference is that MAM treatment coverage is now an outcome indicator across three of the four strategic objectives articulated in the current SRF8 9.

WFP’s shift towards monitoring and reporting coverage is in line with the current global emphasis on coverage described above. However, WFP has little experience in implementing the current methods available for coverage assessment. In addition, only a few examples exist where these coverage assessment methods have been used to measure the coverage of supplementary feeding programmes (SFPs).

In line with the new nutrition measurement requirements outlined in the current SRF, the OSZAN Nutrition Programme Unit is planning to pilot test new coverage measurement techniques in selected countries. The pilots seek to investigate which of the currently available methodologies best suit WFP’s global MAM treatment operations to meet the SRF’s measurement requirements.

Due to the innovative nature of the coverage measurement techniques, there is a need to test the corporate feasibility of the various methods prior to implementation across WFP’s global MAM treatment programmes. WFP does not currently have the in-house technical capability to implement these coverage measurement techniques and therefore requires external technical expertise. Furthermore, while challenges in adapting the coverage measurement methodology to WFP’s scale of operations are expected, solving these challenges offers the opportunity to generate evidence about programme evaluation processes using primary data. For these reasons, WFP is collaborating with Valid International to design and implement coverage measurement techniques suited to WFP’s programmatic requirements.

A review of WFP’s “Measuring Nutrition Indicators in the Strategic Results Framework (2014-2017) Briefing Package” and “2014-2017 Strategic Results Framework Indicator Compendium”, along with the Strategic Results Framework for 2014-2017, was conducted to establish a baseline of WFP’s corporate understanding of, and approach to, measuring the new coverage indicator requirements of the SRF. The review highlighted some confusion and concern as to how and by what means the coverage indicators are to be assessed. Part of the aim of this project is to provide guidance and clarity to WFP’s HQ and country-level decision-makers on the methods available to them and the resources and capacities required for their implementation.

Project Objectives, Results and Outcomes

The principal project goals included both a country-specific objective to design and implement individual surveys, and a global objective using the combined lessons learned from these surveys to develop comprehensive recommendations for WFP’s future coverage work. These core objectives were as follows:

  • Design and implement a MAM coverage survey, including training, in selected areas of each of the four countries
  1. Choose the suitable coverage study design, within the budget available and existing contextual conditions
  2. Provide support to Country Offices to plan the details of the implementation
  3. On-the-job training of senior staff, local partners and field staff
  4. Supervise data collection quality during the in-country training and provide remote support
  5. Submit a report describing the coverage measurement outcomes
  • Provide guidance for WFP Headquarters and Regional Bureaus regarding the coverage methodology, including global lessons learned and recommendations for designing future surveys to assess coverage of MAM treatment programmes

The major outputs under these objectives include:

  • Country-Specific Survey Design Documents Coverage survey design and implementation were adapted to the needs and capacities of each WFP country office, including careful consideration of programme learning needs, admission criteria and case definitions, the coverage estimators used, and cost constraints, as well as the spatial characteristics of the survey area (as described in more detail in the sections below).

  • Detailed Maps To facilitate the process, detailed reference maps of proposed survey areas were produced by Valid International staff, using R scripts to visualise relevant geospatial data (e.g. administrative boundaries, village locations). These maps were instrumental in designing the sampling approach.

  • Online Portal Additionally, a WFP-specific online portal was created, including guide and tutorial tools that describe the chosen sampling frames for all three countries. The portal also presents a “toolbox” explaining the different available coverage survey designs and the sampling and analysis methods used at each stage of assessment, as well as providing access to additional resources, software, etc. The portal can be viewed online. A design document was submitted to each of the countries for final approval and to facilitate WFP country teams’ preparation for the survey.

  • Coverage Training and Implementation WFP country office staff were consulted closely throughout the process, and each country’s specific needs and capacities were taken into consideration to ensure the most appropriate coverage design. CO staff were trained in the chosen survey method, using learning materials specifically designed for this purpose and practical on-the-job training, and were also given background on the other survey methods. Valid International staff were present throughout survey implementation, or for a large part of it where financial limitations applied, and remote technical support was provided for the duration of each survey and throughout data analysis. The training element was monitored, with both trainers and trainees reporting on the topics covered, to ensure comprehensiveness.

  • Country Reporting Individual reports were produced for the respective countries. Each report included the detailed country context, survey design, data analysis and results, as well as discussion and recommendations along with a synthesis of lessons learned.

  • Global Guidance Document The final deliverable in the form of a global guidance document (this document) summarises key findings across all the deliverables, draws overarching conclusions and describes lessons learned. It also makes recommendations for next steps based on analysis of WFP’s needs, and thus can feed into future work on refining and developing data collection and monitoring tools.

  • Coverage Advisor A “coverage advisor” was also produced as part of the deliverable package; its main purpose is to facilitate choosing the right method to assess coverage in a particular programme context. The tool asks the user who intends to conduct a coverage survey a set of questions based on the coverage method selection algorithm developed by Valid. Based on the responses, the tool asks further questions to elicit as much information as possible from the user in order to fine-tune its advice on the most appropriate method to use. This is a work in progress and will be improved and tested by Valid staff for future programme coverage design.
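As a rough illustration, the kind of question-driven selection logic such an advisor applies can be sketched as follows. This is a simplified, hypothetical reduction written for this document – it is not Valid International’s actual selection algorithm, which weighs many more factors (budget, staff, maps, population data, etc.):

```python
# A simplified, hypothetical sketch of question-driven method selection,
# loosely following the method comparisons later in this document. It is
# NOT Valid International's actual selection algorithm.

def suggest_method(wide_area: bool, need_estimate: bool,
                   need_barrier_detail: bool) -> str:
    if wide_area:
        # Wide-area work: S3M estimates and maps coverage; SLEAC gives a
        # cheaper classification per service delivery unit.
        return "S3M" if need_estimate else "SLEAC"
    if need_barrier_detail:
        # Local area with in-depth investigation of barriers and boosters.
        return "SQUEAC"
    # Local-area estimates with coverage maps.
    return "CSAS"

print(suggest_method(wide_area=True, need_estimate=False,
                     need_barrier_detail=False))  # SLEAC
```

A real advisor would ask follow-up questions (as described above) before settling on a recommendation; the point here is only that each answer prunes the set of candidate methods.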

Introduction to coverage assessment methods

Prior to 2002, coverage of nutrition programmes to treat acute malnutrition was typically assessed by first estimating the prevalence of acute malnutrition through a nutrition survey using a standard two-stage cluster sampling methodology adapted from the World Health Organization (WHO) Expanded Programme on Immunization (EPI) coverage survey method10 11 12. Coverage was then estimated in one of two ways. Indirect estimation divided the number of cases admitted to the programme by the total number of expected cases, calculated from the prevalence estimates of the nutrition survey and target population counts13 14 15. Direct estimation, on the other hand, calculated coverage as the percentage of cases found in the nutrition survey who were enrolled in appropriate treatment programmes14 15.
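To make the two approaches concrete, the arithmetic can be sketched as follows. All figures and function names are hypothetical, chosen only for illustration:

```python
# Illustrative arithmetic for the two pre-2002 estimation approaches.
# All figures and function names are hypothetical, for demonstration only.

def indirect_coverage(admitted_cases, population_6_59m, prevalence):
    """Indirect method: programme admissions divided by expected cases,
    where expected cases come from survey prevalence and population counts."""
    expected_cases = population_6_59m * prevalence
    return admitted_cases / expected_cases

def direct_coverage(cases_enrolled, cases_found_in_survey):
    """Direct method: share of cases found by the nutrition survey that
    were enrolled in an appropriate treatment programme."""
    return cases_enrolled / cases_found_in_survey

# 450 admissions, 20,000 children aged 6-59 months, 5% prevalence
print(f"Indirect estimate: {indirect_coverage(450, 20000, 0.05):.0%}")
# 12 of the 40 cases found by the survey were enrolled
print(f"Direct estimate:   {direct_coverage(12, 40):.0%}")
```

With these made-up inputs the indirect estimate is 45% and the direct estimate 30%; in practice the indirect figure inherits all the uncertainty of its population and prevalence inputs.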

Issues and limitations of previous methods

Both these methods suffer from critical limitations. The indirect method involves a great deal of uncertainty as it relies on additional population estimates that have their own issues and limitations. It tends to overestimate coverage13 15 16 and can produce wholly improbable values17.

The direct nutrition survey-based method, on the other hand, has two major problems: 1) sample size; and 2) the assumption of homogeneity15. The sample size calculated for a nutrition survey allows the prevalence of acute malnutrition to be estimated with reasonable precision, but the sample size available for estimating coverage depends on the number of acutely malnourished children found by the survey. When the aim is to estimate the coverage of a feeding programme for severe acute malnutrition, this sample will usually be too small to estimate coverage with reasonable precision. The problem may be less acute for estimating the coverage of MAM treatment, because there will tend to be more moderately malnourished than severely malnourished children in the population.
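The sample-size problem can be illustrated with hypothetical figures: a survey measuring 900 children with assumed prevalences of 2% (SAM) and 10% (MAM) yields roughly 18 and 90 cases respectively, and the precision of a coverage estimate built on those cases differs accordingly (a simple Wald interval is used here purely for illustration):

```python
import math

# Hypothetical illustration of the sample-size problem: a nutrition survey
# measures a fixed number of children, but only the malnourished ones
# contribute to the coverage estimate. All figures are assumptions.

survey_n = 900  # children measured in a typical two-stage cluster survey
for label, prevalence in [("SAM", 0.02), ("MAM", 0.10)]:
    cases = survey_n * prevalence        # cases available for the estimate
    p = 0.5                              # assumed coverage; widest interval
    half_width = 1.96 * math.sqrt(p * (1 - p) / cases)  # simple Wald CI
    print(f"{label}: ~{cases:.0f} cases -> 95% CI of +/- {half_width:.0%}")
```

Under these assumptions, ~18 SAM cases give a 95% interval of roughly ±23 percentage points, while ~90 MAM cases narrow it to about ±10 points – which is why the problem is less acute for MAM.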

The other issue is that the direct estimation assumes that coverage of the feeding programme is homogeneous across the whole survey area, and therefore can only give an overall area estimate of coverage. In a small geographical area, such as a refugee camp or a single village, this assumption may be true. However, over a wider area this assumption is often unlikely to be true, especially for facility-based programmes, because coverage will tend to be greater in areas close to the facility.
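A small hypothetical example shows how an overall estimate can mask this heterogeneity: two zones at different distances from a facility, with very different coverage, average out to a figure that describes neither.

```python
# Hypothetical two-zone example: an overall coverage estimate can mask
# large spatial differences (all numbers invented for illustration).

zones = {
    "near facility": {"cases": 40, "covered": 28},    # 70% coverage
    "far from facility": {"cases": 40, "covered": 8}, # 20% coverage
}

total_cases = sum(z["cases"] for z in zones.values())
total_covered = sum(z["covered"] for z in zones.values())

for name, z in zones.items():
    print(f"{name}: {z['covered'] / z['cases']:.0%}")
print(f"overall: {total_covered / total_cases:.0%}")
```

The overall figure of 45% describes neither zone, which is why later methods were designed to sample and report coverage spatially.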

However, these issues and limitations bore little real consequence during this period, as programme coverage was given far less emphasis than the development of the most effective treatment regime18 19. Coverage was rarely reported, if measured at all, and estimates were generally low20 21 22 23.

Development of better methods

From 2002 onwards, coverage took on greater prominence, particularly in the evaluation of humanitarian interventions. For selective feeding programmes for MAM and SAM, the importance of coverage was further bolstered by two key events during this period: 1) the development of an alternative, community-based approach to the management of acute malnutrition (CMAM); and 2) the inclusion of coverage-specific Sphere standards for selective feeding programmes in humanitarian emergencies. These related events brought about a programmatic environment that required coverage assessment tools able to address the limitations of previous methods and guide the development of CMAM towards sustained high levels of coverage.

The Centric Systematic Area Sampling (CSAS) method was specifically developed for this purpose and was used to assess the coverage of CMAM programmes for several years15 24 25. The CSAS method was replaced by the Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC), which is a lower-cost classification-based development of CSAS, and the Semi-Quantitative Evaluation of Access and Coverage (SQUEAC), a semi-quantitative approach concentrating on a detailed investigation of factors influencing coverage26 27. The adoption of the CMAM model at national levels has led to the need for methods that can provide information about coverage over wide areas. This need is being met by adaptations of the SLEAC method28 and also by the Simple Spatial Survey Method (S3M): an adaptation of the CSAS method but with improved spatial sampling and a more efficient use of data. A summary table of these methods can be found in Table 1.

Table 1: Coverage survey methods

Size of programme (local, district, regional or national)
  • CSAS: Local area method for programme site catchment areas up to district level programmes
  • SQUEAC: Local area method for programme site catchment areas up to district level programmes
  • SLEAC: Wide area method used to classify and map survey results of district level up to regional and national programmes
  • S3M: Large-scale area sampling method used to estimate and map survey results of regional up to national programmes

Survey results reported (estimate or classification)
  • CSAS: Estimate of coverage
  • SQUEAC: Estimate or classification of coverage
  • SLEAC: Classification of coverage for each service delivery unit, with the possibility of reporting overall estimates depending on sample size reached and homogeneity of results
  • S3M: Classification and estimate of coverage (small area up to overall)

Area level by which survey results are applicable (overall, service delivery units, catchment area of programme site)
  • CSAS: Local areas (grids on map) and overall for the district
  • SQUEAC: Catchment area of programme site and overall for the district
  • SLEAC: Service delivery units and overall for the district, region or country
  • S3M: Local areas (grids on map) and overall for the region or the country

Component methods
CSAS:
  • Area sampling methods using quadrats
  • Snowball sampling (active and adaptive case finding) and other high-sensitivity case-finding methods
  • Sample size calculation with finite population correction
  • Data mapping principles and methods
  • Data collection using simple tally sheets and questionnaires
  • Data analysis using simple estimators
SQUEAC:
  • Use of existing qualitative and quantitative data as part of the investigation process of the indicator of interest
  • Mixed qualitative and quantitative approaches to data collection and analysis
  • Hypothesis-testing
  • Snowball sampling (active and adaptive case finding) and other high-sensitivity case-finding methods
  • Lot quality assurance sampling (LQAS) methods
  • Spatial mapping principles and methods
  • Bayesian analysis
SLEAC:
  • Area sampling methods using either quadrats or systematic sampling using lists
  • Snowball sampling (active and adaptive case finding) and other high-sensitivity case-finding methods
  • Lot quality assurance sampling (LQAS) methods
  • Sample size calculations using hypergeometric probability distribution principles
  • Data mapping principles and methods
  • Data collection using simple tally sheets and questionnaires
  • Data analysis using simple estimators
S3M:
  • Area sampling methods using triangles
  • Snowball sampling (active and adaptive case finding) and other high-sensitivity case-finding methods
  • Sample size calculation with finite population correction
  • Data mapping principles and methods
  • Data collection using simple tally sheets and questionnaires
  • Data analysis using simple estimators

Baseline information requirements
CSAS:
  1. A detailed map showing each programme site and villages/locations is a must.
  2. Estimates of population size (all populations and the 6-59 month age group) for each catchment area of a programme site
SQUEAC:
  1. At least a complete list of villages/locations within each catchment area of programme sites (good detailed maps are ideal but optional)
  2. Routine programme monitoring data
  3. Additional data from patient record cards
SLEAC:
  1. At least a complete list of villages/locations within each service delivery unit (detailed maps optional)
  2. Rough estimates of population size (all populations and the 6-59 month age group) for each service delivery unit
  3. Prevalence estimate (ideally an estimate for each service delivery unit, but an aggregate figure is acceptable)
S3M:
  1. Detailed maps showing each service delivery unit and villages/locations are a must.
  2. Estimates of population size (all populations and the 6-59 month age group) for each service delivery unit

Expected deliverables
CSAS:
  1. Estimate of coverage at the level of local areas (grids on map) and overall for the district
  2. Mapping of coverage estimates at the level of local areas (grids on map)
  3. List of barriers to coverage
SQUEAC:
  1. Classification or estimate of overall coverage
  2. List of boosters and barriers to coverage, with detailed information on how they affect coverage
SLEAC:
  1. Classification of coverage at the level of service delivery units and overall
  2. Mapping of coverage classification at the level of service delivery units
  3. List of barriers to coverage
S3M:
  1. Estimate of coverage at the level of local areas (grids on map) and overall
  2. Mapping of coverage estimates at the level of local areas (grids on map)
  3. List of barriers to coverage

Increasing numbers of organisations have used one of these methods to assess the coverage of selective feeding programmes they implement or support. Recently, an inter-agency project called the Coverage Monitoring Network (CMN) was formed with the aim of increasing the capacity of nutrition programmes to design and implement coverage assessments. In support of this aim, CMN commissioned Epicentre in 2014 to organise an independent, participatory review of three of the coverage methods mentioned above – SQUEAC, SLEAC and S3M29 – in order to clarify confusion and address issues raised by users of the methods. This process elicited a number of comments from many users which, based on the Epicentre review, were deemed “solvable misunderstandings that require clarification from the method developers”29. In general, the review focused strongly on the SQUEAC and SLEAC methodologies, as these methods have completed full development and are fully documented27. S3M, on the other hand, is still a method in development (though it is now further along than when it was reviewed), and existing documentation of the method, though comprehensive, is still pending full compilation to include learning points from more recent and current testing of the method.

Much of the development and subsequent mainstreaming of these coverage assessment methods has focused on coverage of SAM treatment30, with only a handful of coverage assessments done on SFPs for MAM. This focus on SAM is most likely due to the methods’ developmental origins: they were developed in part as tools for generating evidence that the coverage achieved by the then-alternative community-based therapeutic care (CTC) approach was higher than that of the more entrenched, clinical therapeutic feeding centre (TFC) approach to the management of SAM. However, documentation and literature on the coverage assessment methods state that although testing and eventual usage of the methods were particular to assessing the coverage of SAM treatment, they may also be used to assess the coverage of SFPs for MAM with some minor modifications15 19. Table 2 provides a list of coverage assessments undertaken in the past 10 years that have included the measurement of MAM coverage.

Table 2: Coverage assessments that include MAM coverage

Country | Year | Survey Location | Survey Type | Survey Implementer
Malawi | 2007 | Not identified | CSAS | Concern Worldwide and Valid International
Somalia | 2010 | Mogadishu | SQUEAC | Oxfam Novib, SAACID and Valid International
Kenya | 2013 | West Pokot County | SLEAC | Action Contre la Faim and Valid International
Kenya | 2013 | Chalbi District, Marsabit County | SQUEAC | Concern Worldwide
Chad | 2013 | Mile and Kounoungou Refugee Camps | SQUEAC | Coverage Monitoring Network, International Medical Corps, UNHCR
Chad | 2013 | Bredjing Refugee Camp | SQUEAC | Coverage Monitoring Network, International Medical Corps, International Rescue Committee, UNHCR
Chad | 2013 | Gaga Refugee Camp | SQUEAC | Coverage Monitoring Network, Bureau d’Appui pour la Santé et l’Environnement, UNHCR
Chad | 2013 | Ouré-Cassoni Refugee Camp | SQUEAC | Coverage Monitoring Network, International Rescue Committee, UNHCR
Chad | 2013 | Farchana Refugee Camp | SQUEAC | Coverage Monitoring Network, Bureau d’Appui pour la Santé et l’Environnement, UNHCR
Chad | 2013 | Treguine Refugee Camp | SQUEAC | Coverage Monitoring Network, UNHCR
Chad | 2013 | Garbatulla sub-county, Isiolo County | SQUEAC | Coverage Monitoring Network, Action Contre la Faim, UNICEF
Kenya | 2014 | Garbatulla sub-county, Isiolo County | SQUEAC | Coverage Monitoring Network, Action Contre la Faim, UNICEF
Somalia | 2014 | Not identified | SQUEAC | World Vision International
Kenya | 2015 | Hagadera Refugee Camp, Dadaab | SQUEAC | Coverage Monitoring Network, Action Contre la Faim, International Rescue Committee, World Food Programme, UNHCR

Table 2 is based on information collected from publicly available reports and/or results of coverage assessments that included MAM coverage indicators. Prior to 2010, only one coverage assessment included MAM coverage. This is consistent with Navarro-Colorado and colleagues’ observations at the time of the dearth of assessment and reporting of MAM coverage23. In the past 5 years (2010 – 2015), considerably more coverage assessments have included the treatment of MAM. These assessments were mostly undertaken by CMN and its partners, mainly in three countries. Only three of these coverage assessments were directly supported by WFP. The assessments in the Chad refugee camps were specifically for MAM coverage only, while the rest were joint assessments of SAM and MAM coverage. The majority used SQUEAC as the assessment method. These exemplify the point made earlier by the developers of the coverage assessment methods that they can be used not only for SAM but also for MAM coverage.

Coverage assessment design considerations

The design of a MAM treatment coverage assessment method needs to consider the following:

Specific objective of the assessment

The purpose for which the programme is being assessed should be the primary factor in determining the type of method chosen, to ensure that results meet programme needs. Typical questions that should be asked about the assessment objectives include:

What learning is required from the assessment outputs? The specific type of learning output required from the coverage assessment affects the choice and design of method(s). The learning outputs relate to the area for which coverage results are to be representative, the type of coverage results needed by the programme, whether maps of coverage results are required, and the level of detail needed on factors affecting programme coverage. Hence, the following related questions should be considered:

  1. Is a coverage assessment needed for a large area (e.g., the highest administrative level of a country – a region, province or state – or the country as a whole) or a small area (e.g., a lower administrative level such as a district, department or municipality, or the specific catchment area of health clinics or distribution sites)? It may also be needed for a collection of small areas that together form a large area, or for a set of large areas.

  2. What type of coverage results does the programme require? Is a coverage classification (i.e., whether coverage is high or low based on a set standard or threshold) sufficient or is a coverage estimate (i.e., specific percentage of coverage with a confidence interval) required?

  3. Are maps of the results needed?

  4. Is a detailed list of barriers and boosters to intervention coverage required?

  5. When will the coverage assessment be done and how frequently will assessments be conducted?

The planned or intended timing and frequency of the coverage assessments will also affect design. Timing refers to the period within the programme’s project cycle in which the coverage assessment is intended to be implemented:

  • The coverage assessment can serve as a needs or baseline assessment to determine the status quo.

  • The coverage assessment can form a component of programme monitoring and evaluation, in which the assessment is done at midpoint and/or endpoint, or at specific intervals of the project cycle, to assess programme progress towards its coverage goals and/or provide information on how better to achieve them. The intended timing shapes the choice of method:
    - If coverage assessments are to be done at the start and end of the programme as part of a comparative study evaluating the change in coverage achieved, a method that provides coverage estimates rather than just coverage classifications is more suitable.
    - If coverage assessments are to be done more frequently (e.g., bi-monthly, quarterly or every six months) as part of a programme monitoring mechanism, a method that is rapid and relatively cheap to implement may be more appropriate, even if it only classifies coverage and does not provide a detailed list of factors influencing coverage.
    - If the coverage assessment is planned for the end of the programme as a means of reporting coverage achievement to funders, a method that provides both coverage estimates and detailed information on factors affecting coverage might be most suitable.

  6. Will capacity-building on the design, planning, implementation, and data analysis and interpretation be a primary need from the assessment?

Depending on the previous survey experience and knowledge of the person/s or team involved in the design and planning of the coverage assessment, some form of capacity-building may be required as part of the survey implementation. A team that has already had previous training and/or exposure to conducting the coverage assessment will most likely be able to implement the assessment with little need for additional time and resources (i.e., internal or external experts) for training and capacity-building. This would mean that the assessment can be done within a set timeframe just for survey implementation (i.e., designing and planning of survey, survey implementation and data collection, data analysis and reporting) and with a set of resources. However, a team that will be doing a coverage assessment for the first time and has little or no experience or training in conducting coverage assessments will need more resources in terms of time and expert support for capacity-building. These considerations will have an impact on the overall resource requirements of the coverage assessment and could have implications for the design of the assessments.

Resources available

The resources available for a particular coverage assessment will determine whether a particular approach is feasible, aside from whether it is the most suited to the programme objectives. Resource constraints may include: 1) budget; 2) staff; 3) vehicles; 4) time; and, 5) input data available (e.g., exhaustive village lists or maps with village locations).

Coverage estimator

Another design issue relates to case definition: determining who counts as covered and not covered. Where the objective of an assessment is to measure the coverage of MAM treatment by a specific selective feeding programme, the focus of the assessment will be the beneficiaries targeted for supplementation by the programme in question. This means that the assessment may exclude other supplementary feeding modalities or vulnerable groups that do not fall under the scope of the programme under evaluation (e.g. blanket SFP or pregnant and lactating women). Coverage estimators were therefore developed, drawing largely on the existing estimators used for estimating coverage of SAM treatment and specified in current coverage assessment guidelines27.

Currently, two estimators are used: point and period coverage. Point coverage is the number of current SAM cases in a treatment programme divided by the total number of current SAM cases. It provides a snapshot of programme performance, putting a strong emphasis on the effectiveness and timeliness of case-finding and recruitment27. Period coverage, on the other hand, is the number of current and recovering cases in a treatment programme divided by all current cases plus recovering cases. It approximates treatment coverage much better (albeit with limitations), as it accounts for children who are no longer cases but are still in the programme. A recent Epicentre review29 highlighted the existence of these two coverage estimators as a source of confusion, with the risk that implementers may choose to assess period coverage (rather than point) simply because it produces a higher estimate. The review suggests that both coverage indicators could be reported, with sufficient context (e.g. on length of stay, timeliness of admissions, etc.) to allow for their interpretation. For this reason, work under this project has further developed the coverage estimators to address some of the confusion around them. This work has included:

  1. A shift in terminology that is more descriptive and specific with regard to what the estimator is actually measuring, allowing both measures to be reported together without confusion.
  • Point coverage is now named ‘case-finding effectiveness’ to reflect more precisely that it measures the programme’s ability to find and recruit current cases – that is, how good the treatment programme is at finding cases of MAM and getting them into treatment. A programme with effective case-finding will score highly on this estimator.

  • Period coverage is now named ‘treatment coverage’, as it is the estimator that most closely approximates this coverage indicator.

  2. Improvement of the period coverage estimator so that it can be used more precisely as a single estimator of treatment coverage.

We propose therefore that programmes report both case-finding effectiveness and the improved treatment coverage indicator and, crucially, give sufficient context to properly evaluate both estimates. For a more detailed description of the definitions of these two estimators, their differences and limitations, and the development work completed under this project, see Annex 3.

Spatial distribution

Geography and location are important factors affecting the coverage of any programme. Assessing the spatial distribution of MAM treatment coverage (i.e. identifying areas of high and low coverage) is therefore important in fully evaluating a programme’s coverage performance. As such, the methods for these assessments were designed to provide at least an indication of how MAM treatment coverage is distributed geographically (i.e. whether coverage is even or uneven throughout the area surveyed). This enables the development of maps showing the spatial distribution of coverage estimates throughout the area surveyed.


The areas selected for the pilot coverage assessments vary greatly in size, with some at the first administrative level (i.e. region) and others at the second administrative level (i.e. district). Scalability of the assessment method, or the ability to assess MAM treatment coverage across different geographical area sizes, is therefore an important design feature. A single coverage assessment method that is cost effective for both small and wide areas would be ideal; however, the objectives and cost constraints of assessments vary with scale. As an extreme example, the learning objectives of an assessment of a single programme catchment will generally be very different from those of a national survey – the former may be looking for specific, detailed information on barriers and boosters affecting that programme’s coverage, whereas the latter will more likely be trying to inform national planning and may not require such detailed information, or may not be able to collect it feasibly.

A general summary of the appropriate scale and relative cost of the available assessment methods is provided in Table 3. However, it should be noted that scale and cost are only two of many considerations in choosing a design. A more comprehensive comparison between survey designs may be found in Table 1.

Table 3. Summary of scale considerations for main coverage assessment methods

Method | Size of programme | Cost intensity | Scalability
CSAS | Small to medium area: programme site catchment to district level | Moderate to high | Not scalable: wide or multiple areas should instead use SLEAC or S3M methods
SQUEAC | Small to medium area: programme site catchment to district level | Low to moderate | Not scalable: multiple districts will require multiple SQUEACs, greatly increasing costs. Investigative aspects can be made routine in programme monitoring to decrease costs for repeated surveys
SLEAC | Small to wide area: programme site catchment to district to national level | Low | Economies of scale as multiple contiguous areas can be included in a single survey
S3M | Wide area: regional to national level | Moderate to high | Economies of scale as multiple contiguous areas can be included in a single survey

Skills capacity

The type of skills capacity required for planning, designing and implementing coverage assessments can be grouped into the following skill sets:

  1. Survey sampling and design

  2. Survey logistics and implementation

  3. Data management

  4. Data analysis

  5. Reporting

As mentioned earlier, all the methods were developed with a spatial orientation; hence, they all require similar or related spatial survey sampling skill sets for all stages of sampling. These spatial survey sampling techniques include 1) areal sampling using square or hexagonal grids; 2) systematic sampling using a complete list of sampling locations organised by administrative units; 3) mapping and segmentation techniques for the sampling locations; and 4) a census or house-to-house approach to case finding. In addition, skills in estimating the required sample size for each coverage assessment method are needed to complete the survey design.
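The systematic sampling technique listed above can be sketched in a few lines. This is a simplified illustration under our own assumptions: the village names, district names and target sample size are hypothetical, and real surveys sort the location list by administrative unit before sampling exactly as shown.

```python
def systematic_sample(locations, n_needed):
    """Systematic sample: order locations by administrative unit, then take
    every k-th entry, where k is the sampling interval."""
    ordered = sorted(locations)             # sorts by (admin unit, village)
    k = max(1, len(ordered) // n_needed)    # sampling interval
    return ordered[::k][:n_needed]

# Hypothetical village list spanning two districts.
villages = [("District B", "V%d" % i) for i in range(1, 7)] + \
           [("District A", "V%d" % i) for i in range(1, 7)]
print(systematic_sample(villages, 4))
```

Because the list is ordered by administrative unit first, the selected locations are automatically spread across units rather than clustered in one.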

Based on the survey sampling and design, skills in logistics and implementation are required in order to run the survey effectively, specifically the field sampling and data collection component. This skill set is much harder to teach didactically as it requires experiential learning and learning-by-doing, given that the conditions and parameters of survey implementation vary between settings and locations.

Data management skills are also vital for conducting a coverage assessment. This skill set includes 1) understanding how to use the survey instruments and how to record the data collected by these instruments; 2) developing an appropriate data entry system to gather all the data collected, using appropriate software tools (an example of a data entry system based on EpiData can be found here); and 3) applying a data verification and cleaning system to ensure that the raw data collected are accurate.

Data analysis skills are needed to process the data and produce analysis results. This basic skill set includes 1) calculating the coverage estimators to be used; 2) applying estimation or classification techniques (e.g., lot quality assurance sampling calculations); and 3) generating appropriate graphs and charts (e.g., Pareto charts) for illustrating the factors affecting coverage.
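The Pareto-chart step can be illustrated with a minimal tally of barriers reported by carers of non-covered cases. The barrier labels and counts below are hypothetical; the point is only the mechanics of ranking barriers by frequency with a running cumulative percentage.

```python
from collections import Counter

# Hypothetical barrier responses from carers of non-covered cases.
responses = ["distance to site", "no ration available", "distance to site",
             "carer too busy", "distance to site", "no ration available"]

tally = Counter(responses).most_common()   # barriers sorted by frequency
total = sum(n for _, n in tally)
cumulative = 0
for barrier, n in tally:                   # rows of a text-mode Pareto chart
    cumulative += n
    print(f"{barrier:<22}{n:>3}{cumulative / total:>8.0%}")
```

The ranked, cumulative view is what makes a Pareto chart useful here: it shows which one or two barriers account for most of the non-coverage.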

For particular methods such as S3M, however, some intermediate to advanced data analysis skills will be required in order to produce all the types of output relevant to the method. These skills include 1) understanding of data structure and relevant data manipulation techniques; and 2) using RAnalyticFlow to run mapping, analysis and reporting scripts.

Finally, given the results of the survey, skills in interpreting and reporting results will be needed. This skill set includes 1) interpreting coverage estimates or classifications; 2) interpreting factors affecting coverage; 3) analysing the implications of results for the programme; 4) reporting results and relevant interpretation; and 5) recommending appropriate action needed to improve programme coverage.

Country and area selection

Four countries were initially chosen by WFP for the pilot coverage assessments, primarily based on the country programmes’ interest in conducting a coverage survey of their SFP. These countries were Chad, Democratic Republic of Congo, Niger and Pakistan. After initial selection, the DRC country programme decided not to pursue its participation in the pilots. Further discussions were held with the three remaining country programmes to determine the specific objectives of the assessments and which specific areas within each country were to be included. Because this was intended to be a set of pilot surveys, practical factors such as cost, area size and area access had to be balanced with learning factors such as ensuring ideal teaching and training conditions and maximising learning opportunities. Whilst these two sets of considerations are not necessarily contradictory, particular constraints on practical factors (e.g. security issues, large size of the intended coverage area, etc.) limit what is possible in terms of learning. For example, a consultant trainer may be unable to accompany the teams in the field due to insecurity, thereby limiting live field training, or a large preparation overhead due to large survey area size may reduce the length and quality of training. Equally, ideal training conditions for learning (i.e. a small, manageable survey area and a small survey team allowing maximum training time and support) are not always possible given the practical need to accomplish a survey of a specified programme area.

For Chad and Niger, the size of the programme area to be assessed was the main factor that influenced the coverage assessment method chosen. In both countries, a level 1 administrative unit, in this case the region, was chosen for the pilot coverage assessments. For Chad, the regions of Kanem, Barh-El-Gazal and Batha were chosen while for Niger the choice was Zinder. In terms of type of coverage results required, coverage estimates for each region were needed along with a detailed list of barriers and boosters to coverage. Given these considerations, the most appropriate coverage assessment method for both settings was a wide-area method able to provide estimates for each region. The only two methods that can be scaled to such a wide area are SLEAC and S3M, but since coverage estimation (rather than classification) was the coverage result required, S3M was selected. It should be noted that neither SLEAC nor S3M can provide a detailed list of barriers and boosters to coverage. At most, these methods can only provide a ranked list of barriers to coverage. The only method that can elicit detailed information on barriers and boosters is SQUEAC, particularly its investigative components. Hence, it was proposed that once the S3M was completed and areas of high and/or low coverage identified, focused investigative SQUEACs could then be conducted in selected high/low areas to elucidate detailed barriers and boosters to coverage. However, limitations with the budget and timeframe for completing the survey meant that the SQUEACs could not be implemented.

For Pakistan, similar factors of programme size and the type of coverage results required were taken into consideration in selecting the most appropriate coverage method. Three distinct and non-contiguous districts were chosen as the survey areas for the pilot coverage assessments in the country. The WFP country office also stated that the programme does not operate in the entirety of each district. Instead, the programme only covers specific villages within particular union councils that lie within the catchment areas of health centres or distribution sites for TSFP. The programme size and the geographic structure of how the programme operates indicated the need for a small-area coverage assessment method. The WFP country office also wanted coverage estimates for each of the district programmes and a detailed list of barriers and boosters. Given these considerations, SQUEAC was the most applicable method for each of the districts. Hence, three SQUEACs were implemented.

Capacity assessment and actions

The design of the survey implementation took into account mechanisms for knowledge transfer to in-country WFP programme staff and/or local counterparts. The pilot coverage assessments were planned in such a way that those involved were trained on-the-job and additional training and mentoring support were provided during the course of the assessment, through structured training sessions, training guides and documents and through online resources.

It should be noted that there was no expectation that, by the end of the process, participants would be able to conduct coverage assessments by themselves, as proficiency in the methods requires more extended training and experience. The main aim of this capacity-building process, therefore, was for participants to become familiar with the steps to be taken, the skills required and the how-to guides and resources available for conducting the specific type of coverage assessment implemented in the respective country programmes. This familiarity would then enable the participants to make a self-assessment of their skill sets and identify those on which they would need further mentoring and support.

To support this self-assessment, a training allocation form was formulated. This form was initially filled out by the survey technician from Valid International who provided the on-the-job training, by specifying the various knowledge and skill sets that were covered during the pilots, identifying those who had participated and then commenting on the level of capacity reached at the end of the process. The form was then given to the respective staff / participants as a form of feedback, and they were given the opportunity to make their own self-assessment regarding the mentorship they had received and the additional new and/or refresher training they needed based on the list of knowledge and skills sets identified. This feedback from participants then served as the basis for further support and mentorship provided by the Valid International survey technicians.

The form was administered at different stages of the pilot survey process. For Chad and Niger, where the surveys were done first, the training allocation form was only finalised once the survey process was completed and was therefore filled out at the end. This meant that the feedback gathered from the self-assessment only became available when the Valid International survey technician was already out of the country and unable to act upon the further training needs identified directly with the respective staff. To address this, a series of remote training sessions was designed and conducted based on the further mentoring and support needs identified by the self-assessment. For Chad and Niger, these needs centred on more in-depth discussion and practical exercises on the stage 1 sampling for S3M and on the data management and data analysis required. These sessions were held by teleconference via Skype® between the Valid International survey technician and the relevant country office staff, aided by online guides and exercises on stage 1 sampling and on data management and data analysis. These remote sessions were then followed by a week-long meeting in Niger where training sessions on data management and data analysis were conducted, after a one-day session in which the results of the pilot survey in Niger were presented to a wider audience of in-country stakeholders. At the end of this meeting, the general feedback from participants was that the gaps in skills and knowledge they had identified earlier were addressed and that they would potentially be able to conduct their own coverage assessments in the future.

For Pakistan, on the other hand, the training allocation form was filled out midway through the coverage assessment process. This meant that the further mentoring and support needs identified by the self-assessment were acted upon by the Valid International survey technician alongside the survey implementation. The topics identified for further training were the investigative components of SQUEAC, which included analysis of routine programme monitoring data and of additional data collected from clinics and distribution sites, and mentorship on the sample sizes required for the survey and the stage 1 sampling process. Additional mentoring sessions were held on these topics, supported by the FANTA SQUEAC and SLEAC technical reference27 and by online resources specifically created for this purpose.

Country reports

For a detailed report of the process and outputs of the pilot assessments in each of the countries, see the website produced for this project.

Synthesis of lessons learned

The pilot coverage assessment process highlighted the country offices’ awareness of the new coverage assessment requirement in the current WFP SRF and their willingness and desire to measure this across their programmes. However, the country offices lacked human resources with the capacity to determine the most appropriate coverage assessment method to use, given their particular type of programming, and to support its eventual implementation. Without knowing which method was appropriate, they found it challenging to determine whether the resources they already had, or were planning to allocate for coverage, would be adequate. This gap in knowledge and capacity is therefore critical and is the basis of most of the lessons discussed in this section.

Designing and implementing the coverage assessment

1. Explicit and ongoing guidance is needed to select survey design/methodology for different countries and contexts. With the range of new methods available to assess coverage of treatment of MAM, choosing the most appropriate and cost-effective approach for any given scenario is challenging for COs and other decision-makers within WFP. Even where there is technical nutrition/epidemiology capacity available in country, it can be difficult for technicians to keep abreast of work on methods that are still being developed and improved. Whilst the methods (SQUEAC, SLEAC and S3M) piloted in the three countries of this project’s focus appeared fit for the purposes and objectives in these specific contexts, there is a wide range of variables (including the objectives of the assessment, the level of representativeness of results required and the resources available) that can affect selection of the most appropriate method in any context. This is especially true given that the methods under test during this project are relatively new and still being adapted to improve effectiveness and cost-effectiveness.

Such guidance would need to support decision-making around the following issues identified during implementation of this project:

a. Agreeing the survey objectives and the area to be covered by the assessment: These two factors are linked. Where the objective is to assess overall coverage, i.e. to obtain a programme-level coverage estimate for a MAM treatment programme, ‘sub-sampling’ of areas to survey within the programme area is not possible. Here, prior identification of the programme area is essential, and the difference between the “intended” and the “actual” area covered must be understood and agreed. If the programme is intended for the whole district, then the survey should not exclude villages that have not been ‘reached’ by the screening team. There is a difference between ‘not covered’ and ‘not reached’.

b. Timing of coverage survey within the programme implementation cycle: To ensure the results of a survey are of maximum value to programme/organizational learning and fulfil the objective of the assessment, the survey should be conducted at the appropriate time. This needs to be considered at design stage. Where the survey is being conducted primarily to assess coverage of an operational programme, it should be conducted:

  • When the programme is fully operational to ensure results reflect the reach of services at full potential

  • Long enough after programme start to ensure that results are not reflective of the start-up problems common to many large selective feeding interventions

  • Long enough before programme end to ensure that results are not reflective of the scaling down of operations

Where the survey is being conducted to provide a baseline/endline evaluation of the impact of operations on MAM treatment coverage, timing of implementation will need to be adapted to a ‘baseline/endline’ sequencing schedule. Where the survey is being conducted to provide midterm learning on both a) levels of coverage achieved to date and b) the factors supporting and preventing adequate coverage in these areas, purposive sampling within a programme area may be possible.

2. Explicit and ongoing guidance is needed to plan the resources and information required to implement different survey designs in different contexts. Resource requirements for every coverage assessment are dependent on the survey methodology selected and the context within which the survey is to be implemented. Resources needed, including budgetary and technical capacity, should be identified very early in the design stage and may play a role in method selection if resources are limited. Early mapping of resource needs could support decision-makers at CO level to advocate for the additional funding and technical capacity required to fill identified gaps before the survey planning and implementation process gets underway.

Such guidance would need to support planning around the following issues identified during implementation of this project:

a. Capacity for the survey process: The skills and implementation capacity required for planning, designing and implementing coverage assessments can be grouped into the following skill sets and will vary according to the specific survey methodology adopted:

  • Survey sampling and design

  • Survey logistics, data collection and implementation

  • Data management

  • Data analysis

  • Interpretation of results and reporting

Whilst capacity for survey sampling and design and data analysis is variable among programme managers and nutrition experts at WFP CO level, the VAM unit does have these requisite skills and there is great potential for it to take a stronger lead in the more technical aspects of future coverage assessments where its capacity allows. However, because the specific methods used to estimate treatment coverage are quite new and require spatial survey sampling skill sets for all stages of sampling and data analysis, there is likely to be a need for ongoing (in the short to medium term) technical support, available either in country or through specially developed guidance tools and applications. This technical support could guide the VAM unit through the more technically complex or unfamiliar stages of sampling, data analysis and results interpretation.

All WFP COs we worked with during this project did have good project management and logistics capacity, but earlier identification of needs would have facilitated their timely availability. WFP COs were unlikely to have sufficient data collection staff available for the time needed. Working with partners (such as the national MoH) to supplement data collection and survey implementation capacity worked well in all countries to fill this capacity gap, as well as to support buy-in to the survey process and outcomes from national partners. This is a strategy that should be maintained for future assessments.

A dedicated survey team, including a project manager and data collectors, should be involved and on hand for the duration of the survey. This is particularly important for surveys that have a training and capacity-building element. Future implementations of coverage assessments would therefore need to take this into account, especially if they aim to train others on the methods.
In Pakistan, where SQUEAC was implemented, staff were already familiar with the method, having seen or participated in it before for the assessment of SAM coverage. This was helpful because their prior exposure meant that they could be guided through the process rather than simply taught, with more participation and engagement. For Niger and Chad, where S3M was implemented, the developmental stage of the method and the lower level of documentation meant a steeper learning curve, and significantly more intensive training was required.

b. Early budget development: Closely linked to capacity requirements, budget requirements need to be developed in parallel and as early as possible in the assessment planning process. Costs of implementation are dependent on survey design (see Table 3) and on the context of implementation.

c. Information requirements for every coverage assessment are also dependent on the survey methodology selected and the context within which the survey is to be implemented. Sufficient time and resources need to be scheduled in the work-plan at design stage for both administrative/logistical needs and the early information/data needs of the assessment to be met in parallel. If early information/data needs are not met in a timely fashion, this can lead to considerable delay in the assessment workplan. These requirements should always include (at a minimum):

  • Compilation of recent routine programme monitoring statistics where they are available. These data are critical to support both survey planning and the interpretation of survey results.

  • The early provision of detailed maps covering the survey area is critical. WFP has great capacity for mapping in general, with detailed, high-resolution maps for almost all countries where it operates. Making these maps (and particularly the data used to create them) available for coverage surveys will be essential for all coverage assessments, particularly if the mapping of coverage results is considered an important output of the assessments.

A checklist, for use by the programme/survey team, of the various information needs at design stage for each type of assessment would help to plan and prioritise preparation efforts and resources.

3. Ongoing guidance and strengthening of mechanisms for data collection would support data precision and accuracy for future surveys. The implementation of any survey or assessment in remote areas and under challenging conditions will always pose challenges for ensuring that the data collected are of high quality. This is particularly true of coverage surveys, because the methods are not as mainstream as other survey methods and thus require more supportive supervision. Several ideas were highlighted during data collection across all three countries that could help to ensure data reliability:

a. Templates for the collection of information and data at all stages of design and implementation. Standardised templates for collecting the information and data needed at design stage and during data collection are likely to support better data/information quality. They will also improve the efficiency of survey implementation and allow more time for analysis and interpretation of data.

b. Use of electronic data entry systems could add considerable value, both in terms of savings in time and human resources and in reducing data recording and entry errors. However, the creation of these forms, to ensure their appropriateness for purpose and practical use, takes time, effort and appropriate technical capacity. As such, there would be cost implications of introducing such systems, both in terms of the initial outlay for data collection equipment and the training for its use.

c. Use of GPS receivers / GPS-enabled devices to locate villages and to record their locations: this is particularly relevant to S3M, but the devices can also be used for the other survey methods if available. Involving the VAM unit, which holds data on villages and their locations, will greatly improve the quality of village location data.

d. Translation of survey materials (primarily data collection forms) into the local language would support the quality and reliability of data collected. It is also important to ensure all survey participants can speak the local language or that appropriate mechanisms for translation be put in place.

e. Women on data collection teams: In contexts where culture inhibits the movement of women and their interactions with males, sourcing a data collection team that has a high proportion of women is critical to gaining the access required to survey participants.

Coverage results and technical survey design

  1. Generally, coverage in assessed areas was low. Coverage was low in almost all programme areas assessed during the pilot surveys, with the exception of one catchment area in one of the districts in Pakistan. This generally reflects the fact that there was minimal or limited service provision during the survey period, due to the renewal of agreements with cooperating partners.

  2. Coverage reflected length and concentration of programme presence. Coverage tended to be higher in areas with more established programmes and / or with more implementing partner presence as shown in Chad and in Niger. This is likely to be reflective of a more collaborative, coordinated approach between all key stakeholders, which is essential for reaching the need that exists.

  3. Coverage results reflected awareness of treatment services rather than awareness of MAM. In general, there is great awareness of the condition of acute malnutrition in children at community level. It is the knowledge or awareness of treatment services provided for those who are acutely malnourished that needs to improve. This may be partly attributed to the fact that the treatment service itself was often limited or on hold. In most cases of low coverage, screening and case finding practices also need to be overhauled to ensure early and regular identification of MAM children through a wider network of actors at community and also at government and private health facility level.

  4. Information and coverage outputs of methods used met expectations. S3M was able to provide overall coverage results and a map of coverage of the programme areas surveyed. It was also able to provide a ranked list of barriers to coverage. SQUEAC, on the other hand, was able to provide an overall coverage result for each of the districts surveyed and a detailed list of barriers and boosters to coverage, which were ranked and then presented using a concept map and/or a mind map. A sense of the spatial distribution of coverage was also reported, although actual maps were not produced.

  5. SQUEAC investigation of SFP coverage does not necessarily require the use of Bayesian analysis. The pilot SQUEAC survey in Pakistan showed that adequate sample sizes of MAM cases can be reached to report an estimate of coverage with acceptable precision. Hence, the chronology of steps in conducting SQUEAC investigations can potentially be re-ordered, given that the investigative component of SQUEAC is not required first. This in turn has the potential to make SQUEAC investigations of SFP coverage much more efficient.
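Where the sample of MAM cases is adequate, one simple alternative to the Bayesian step is a direct binomial proportion with a confidence interval. The sketch below uses the normal-approximation interval and entirely hypothetical case counts; it is an illustration of the idea, not the analysis prescribed by the SQUEAC method.

```python
import math

def coverage_estimate(covered, total, z=1.96):
    """Binomial proportion with a normal-approximation 95% CI,
    truncated to the [0, 1] interval."""
    p = covered / total
    se = math.sqrt(p * (1 - p) / total)    # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: 34 covered cases found out of 120 MAM cases in total.
p, lo, hi = coverage_estimate(34, 120)
print(f"coverage = {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The width of the interval makes the trade-off explicit: the smaller the case sample, the wider the interval, which is precisely when the Bayesian prior adds value.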

  6. Classification of MAM coverage can generally give results even for smaller administrative areas. Lot quality assurance sampling (LQAS) analysis as used in SLEAC is feasible and can be considered as an option if classification of coverage above or below a standard or threshold is an acceptable output. The required sample size for reporting coverage classifications was generally reached in all the survey areas. This option can be used if rapidity is paramount or if disaggregation of overall coverage results is considered an important output. SLEAC classification can be applied to the lower administrative unit coverage results, as this requires much smaller sample sizes, and an aggregated coverage estimate (an exact proportion with a confidence interval) across the various administrative units can then be reported.
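The LQAS classification reduces to a simple decision rule, sketched below for a single coverage standard. The standard (50%) and the case counts are hypothetical; the two-class rule shown here is a simplified version of the classification used in SLEAC.

```python
def lqas_classify(covered, total_cases, standard=0.5):
    """Two-class LQAS rule: with decision threshold d = floor(n * p),
    classify coverage as meeting the standard if covered cases exceed d."""
    d = int(total_cases * standard)        # decision threshold
    return "at or above standard" if covered > d else "below standard"

# Hypothetical: 40 MAM cases found in a district, 23 of them covered.
print(lqas_classify(23, 40, standard=0.5))
```

Because only the threshold comparison matters, a classification needs far fewer cases than an estimate with a confidence interval, which is what makes it workable at lower administrative levels.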

  7. S3M sampling unit size should remain small. The size of the sampling unit for S3M should be kept as small as possible, and definitely smaller than the size used for the pilot surveys in Chad and Niger. This is particularly important in order to disaggregate results down to lower administrative units (either as classifications or estimates of coverage) and to increase the robustness of the high-resolution coverage maps produced by S3M. A resolution of between 100 and 200 km² would be ideal (which translates to a distance of no more than 12–15 km between sampling points).

Conclusions and Recommendations

The pilot surveys have indicated the viability of both the SQUEAC and S3M coverage survey methods for the assessment of TSFP coverage. Surveys across the three WFP country programmes assessed were implemented within the planned time and budget requirements, and there was buy-in to outcomes and results from key national stakeholders (including the WFP COs). This is an important finding that will support uptake and use of results for organizational learning from these and future surveys.

For SQUEAC, the pilots in Pakistan have demonstrated its functionality and utility as a survey method for small-scale programmes up to the level of the district. Modifications and simplifications to the technical design of the SQUEAC analysis tested during this project are likely to support its further use, with the focus remaining on the investigative aspects of the method, its strongest feature, that are harnessed to assess the boosters and barriers to programme coverage. The pilots for SQUEAC explored several possible modalities of training and supervision, the results of which are supportive of the possibility of adopting a remote approach to support. This will prove useful in contexts where in-person support may be unfeasible due to security/access issues and/or budget limitations. Similar models of SQUEAC implementation have been tested by ACF-UK and currently CMN31. We propose some systematic adjustments to preliminary phases of the SQUEAC investigation, such that analysis of routine programme data and auxiliary data is done regularly as part of programme monitoring and evaluation. This would support the ongoing programme M&E cycle and would also considerably reduce the costs of a SQUEAC. We believe that SQUEAC together with these proposed modifications has great potential to be utilised for recurring and quick coverage assessments of WFP’s small-area programmes, or of purposive sampled small-areas within WFP’s wide-area programmes.

S3M offers an option as a coverage assessment method for programmes that are wide-area or at large scale (e.g., multiple districts, regions, sub-national, national). The two pilots exemplified the scalability of S3M, with the surveys covering three regions in Chad and one region in Niger. In addition, this work has demonstrated the ability of S3M to generate maps of the spatial distribution of coverage across the entire survey area. Such maps enable a more refined diagnosis of areas that are succeeding or failing in terms of coverage, which in turn supports detailed programme planning and decision-making. Through the pilots, we were able to investigate optimal sizing of the sampling grids/units for the stage 1 sampling. The outcome of this investigation underlined the need to set the sampling grid size as small as possible within resource constraints (around 100 sq. km, up to at most 200 sq. km). Even at this smaller grid size, we have demonstrated how efficient the survey process can be: the survey in Chad, with 133 sampling points, was accomplished in 17 days (the 10 teams together surveyed an average of 8 sampling points per day), while the survey in Niger, with 78 sampling points, was accomplished in 2 weeks. With further adjustments and improvements to survey planning and preparation, and potentially with the use of electronic data collection systems (all discussed under section 3 above), the survey could be made more efficient, reducing the time and cost of implementation.
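The timing figures above follow from simple workload arithmetic, which can be sketched as a planning aid (the function name and the ceiling-rounding are our own, for illustration only):

```python
import math

def field_days(n_points: int, teams: int, points_per_team_per_day: float) -> int:
    """Estimate the number of field days needed to visit all sampling
    points, given the number of teams and each team's daily throughput.
    Illustrative planning arithmetic only, not part of the S3M method."""
    return math.ceil(n_points / (teams * points_per_team_per_day))

# Chad pilot: 133 sampling points, 10 teams averaging ~0.8 points per
# team per day (about 8 points per day overall) -> about 17 days.
print(field_days(133, 10, 0.8))  # 17
```

The same arithmetic shows where efficiency gains would come from: raising per-team throughput (e.g., through electronic data collection) shortens the survey without adding teams.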

Whilst this project has demonstrated the general adaptability and application of SQUEAC, S3M and SLEAC to the measurement of the new coverage indicators in WFP’s SRF (2014-17), it cannot make specific recommendations on the method most suited to every country/programme context in which WFP operates. As discussed earlier in this report, the choice of method and the way in which it is implemented depend on various features of the country programme, the type of coverage information required, and the level of resources available, both material and in human capacity. Hence, to respond to the need of RBs and COs for explicit and ongoing guidance in this area, we have developed the first phase of a ‘Coverage Advisor’ tool as part of this project. This tool has the potential to guide COs on the survey method most appropriate for their needs and to identify the capacity and budgetary requirements of the assessment. Beyond the broad direction on appropriate applications of the different survey methodologies given in this report, the development of specific global recommendations on which methods to use for future coverage surveys of WFP interventions requires a basic assessment of each country programme. We therefore recommend the following immediate steps to initiate this global assessment:

  1. All concerned WFP country programmes intending to conduct a coverage assessment of their SFP should use the Coverage Advisor tool. The tool asks the respective WFP country programme a specific set of questions that describe and detail the different features of the programme and the country context in which it operates. It is organised into different sections, including: 1) information on the area/location of the programme; 2) the coverage information requirements of the programme; 3) country demographics and related nutritional information; 4) survey cost calculations; and 5) a capacity assessment of the WFP country programme and its implementing partners. At each stage of the assessment, the tool provides specific advice on the most appropriate coverage assessment method to use, the related guides and information to support implementation, the amount and cost of resources needed to implement the method, and an assessment of the country programme’s capacity to implement the recommended survey method/s. The specificity and detail of the advice provided by the Coverage Advisor tool depend on the amount of information provided by the user. Once the assessment is finished, the results can be printed out as a reference for the actions needed to implement the recommended survey, saved for future reference, or submitted as data to the tool developers. Submitted data will allow the tool developers to analyse the information in order to improve the tool and to provide additional support to those who need advice beyond that provided by the tool.

  2. As each country completes the assessment, it will document the recommendations for survey design as well as the total cost and required resources and the broad capacity gaps that require filling to implement surveys. This could be compiled iteratively to form a global analysis of coverage assessment needs, which would support WFP to plan and manage resources accordingly.

  3. Depending on the capacity and skill sets that have been identified by the Coverage Advisor tool as lacking or needing further support at CO level for survey implementation, appropriate learning tools (most of which have been developed already) will be identified by the tool and links and downloads for these materials will be made available to the user. For those COs with adequate survey competencies and a strong VAM unit, these materials and resources may provide sufficient guidance for them to conduct their own coverage assessments in the future. For those COs that have less capacity, the tool will advise contacting known service providers who could provide appropriate training and support to implement the coverage assessment.
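The kind of decision logic the Coverage Advisor encodes can be illustrated with a deliberately simplified sketch. The categories and function below are our own assumptions for illustration; the real tool weighs many more inputs (coverage information requirements, demographics, costs and partner capacity):

```python
def recommend_method(programme_scale: str, needs_coverage_map: bool = False) -> str:
    """Toy decision rule mirroring the broad guidance in this report:
    SQUEAC for small-area programmes, S3M where a spatial map of
    coverage over a wide area is wanted, and SLEAC for wide-area
    classification. A simplification for illustration only."""
    if programme_scale == "small-area":   # e.g. up to district level
        return "SQUEAC"
    if needs_coverage_map:                # spatial distribution wanted
        return "S3M"
    return "SLEAC"                        # wide-area classification

print(recommend_method("small-area"))                          # SQUEAC
print(recommend_method("wide-area", needs_coverage_map=True))  # S3M
print(recommend_method("wide-area"))                           # SLEAC
```

Even this toy rule makes the point that method choice is programme-specific, which is why a per-country assessment is recommended before any global roll-out.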

To mainstream the assessment of coverage throughout the organisation and its country programmes, WFP requires a detailed analysis of global assessment needs as a first step, followed by high-quality and ongoing guidance to support survey design and implementation. We believe that the use of the Coverage Advisor tool, in conjunction with relevant technical support where required, could provide a good basis for this. The tool has been developed by those who implemented the pilot surveys for WFP and is maintained on a private server using open-source tools. As such, its continued development and use to support the method selection, capacity assessment and capacity-building needs identified in the lessons learned from the pilot surveys could come at little cost to WFP.


  1. Bryce, J. et al., 2013. Measuring Coverage in MNCH: New Findings, New Strategies, and Recommendations for Action. L. Chappell, ed. PLoS Medicine, 10(5), pp.e1001423–9.

  2. Bryce, J. et al., 2008. Maternal and child undernutrition: effective action at national level. The Lancet, 371(9611), pp.510–526. 

  3. International Food Policy Research Institute, 2014. The coverage of nutrition-specific interventions needs to improve. In Global Nutrition Report 2014: Actions and Accountability to Accelerate the World’s Progress on Nutrition. Washington, DC: International Food Policy Research Institute, pp. 1–9.

  4. Independent Expert Review Group on Information and Accountability for Women’s and Children’s Health, 2012. Every Woman, Every Child, from Commitments to Action: The First Report of the Independent Expert Review Group (iERG) on Information and Accountability for Women’s and Children’s Health, Geneva: World Health Organization. Available at:

  5. Bhutta, Z.A. et al., 2008. What works? Interventions for maternal and child undernutrition and survival. The Lancet, 371(9610), pp.417–440. 

  6. Bhutta, Z.A., Das, J.K., Rizvi, A., et al., 2013. Evidence-based interventions for improvement of maternal and child nutrition: what can be done and at what cost? The Lancet, 382(9890), pp.452–477. 

  7. Bhutta, Z.A., Das, J.K., Walker, N., et al., 2013. Interventions to address deaths from childhood pneumonia and diarrhoea equitably: what works and at what cost? The Lancet, 381(9875), pp.1417–1429. 

  8. World Food Programme, 2013. WFP Strategic Results Framework (2014-2017), Rome: World Food Programme. Available at:

  9. World Food Programme, 2009. Strategic Results Framework, Rome: World Food Programme. Available at: 

  10. Henderson, R.H. & Sundaresan, T., 1982. Cluster Sampling to Assess Immunization Coverage - a Review of Experience with a Simplified Sampling Method. Bulletin of the World Health Organization, 60(2), pp.253–260. 

  11. Lemeshow, S. & Robinson, D., 1985. Surveys to measure programme coverage and impact: a review of the methodology used by the expanded programme on immunization. 38(1), pp.65–75.

  12. Bennett, S. et al., 1991. A simplified general method for cluster-sample surveys of health in developing countries. World Health Statistics Quarterly, 44(3), pp.98–106. 

  13. Coulombier, D. et al., 1995. Nutrition Guidelines 1st ed., Paris: Médecins Sans Frontières.

  14. Shoham, J., 1994. Emergency Supplementary Feeding Programmes, London: Relief and Rehabilitation Network and Overseas Development Institute.

  15. Myatt, M. et al., 2005. A field trial of a survey method for estimating the coverage of selective feeding programmes. Bulletin of the World Health Organization, 83(1), pp.20–26.

  16. Beaton, G.H. & Ghassemi, H., 1982. Supplementary feeding programs for young children in developing countries. The American journal of clinical nutrition, 35(4 Suppl), pp.863–916. 

  17. United Nations Children’s Fund, 2013. Evaluation of community management of acute malnutrition (CMAM): Global synthesis report, New York: United Nations Children’s Fund. 

  18. Collins, S., 2001. Changing the way we address severe malnutrition during famine. The Lancet, 358(9280), pp.498–501. 

  19. Valid International, 2004. Community-based Therapeutic Care (CTC) T. Khara & S. Collins, eds., Oxford: Emergency Nutrition Network.

  20. Van Damme, W., 1998. Medical assistance to self-settled refugees: Guinea 1990-1996. Studies in Health Services Organisation and Policy, (11). Available at:

  21. Van Damme, W. & Boelaert, M., 2002. Therapeutic feeding centres for severe malnutrition. The Lancet, 359(9302), pp.260–261. 

  22. Schilling, P.R., 1990. Supplementary feeding programs: a critical analysis. Revista de Saude Publica, Sao Paulo, 24(5), pp.412–419. 

  23. Navarro-Colorado, C., Mason, F. & Shoham, J., 2008. Measuring the effectiveness of Supplementary Feeding Programmes in emergencies. Humanitarian Practice Network Paper, pp.1–32.

  24. Myatt, M., 2004. New Method for Estimating Programme Coverage. Field Exchange, 21, p.3. 

  25. Sadler, K. et al., 2007. A comparison of the programme coverage of two therapeutic feeding interventions implemented in neighbouring districts of Malawi. Public Health Nutrition, 10(09), pp.907–913. 

  26. Myatt, M., 2011. SQUEAC: Low resource method to evaluate access and coverage of programmes. Field Exchange, (33), pp.1–15. Available at:

  27. Myatt, M. et al., 2012. Semi-Quantitative Evaluation of Access and Coverage (SQUEAC)/ Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC) Technical Reference, Washington, DC: FHI 360/FANTA.

  28. Guevarra, E., Guerrero, S. & Myatt, M., 2012. Using SLEAC as a wide-area survey method. Field Exchange, (42), p.40. 

  29. Epicentre, 2015. Open review of coverage methodologies: Questions, comments and ways forward, Coverage Monitoring Network. Available at:

  30. Godden, K., 2013. External Evaluation: The Coverage Monitoring Network project: Improving nutrition programmes through the promotion of quality coverage assessment tools, capacity building and information sharing, London: Accion Contra la Faim. Available at: 

  31. Alvarez Moran, J.L., Mac Domhnaill, B. & Guerrero, S., 2013. Remote monitoring of CMAM programmes coverage: SQUEAC lessons in Mali and Mauritania. Field Exchange, pp.1–7.