Learning in maternal and newborn health – five essential issues!

Making time for learning from data is so vital for maternal and newborn health policy and programming

By Professor Joanna Schellenberg, Principal Investigator of the IDEAS project

I have not failed. I’ve just found 10,000 ways that won’t work.

Thomas Edison, inventor of the long-lasting practical electric light bulb

I have led the IDEAS project – based at the London School of Hygiene and Tropical Medicine – for nearly seven years now. It’s a multi-country and multi-disciplinary project with measurement, learning and evaluation at its heart. In advance of our satellite session at the Fourth Global Symposium on Health Systems Research, titled “Measurement, learning and evaluation for maternal and newborn health”, I want to reflect on why making the time for learning from data is so vital for maternal and newborn health policy and programming.

Back in 2010 we undertook a measurement, learning and evaluation project in Nigeria, Ethiopia and India, with a focus on maternal and newborn health. The original research questions came from our funder – the Bill & Melinda Gates Foundation. The infographic below explains the context, coverage and outcome indicators in the three countries in which we work.

Whose learning is it anyway?

Throughout the IDEAS project we have worked with many partners including governments, NGOs, implementation grantees of the Bill & Melinda Gates Foundation and other health systems researchers. What we have seen is a great diversity of perspectives and interests in the whole area of measurement, learning and evaluation. It can draw out unexpected enthusiasm from some stakeholders, while for others it is still of low importance.

Measurement, learning and evaluation brings plenty of opportunities but also plenty of challenges. So, here are five issues that six years of IDEAS research has surfaced:

1. What are we evaluating?

The original goals of the project included the evaluation of a strategy which involved 65 innovations put in place through nine partners in three countries over five years. Although all these innovations addressed maternal and newborn health, and all were mapped against a theory of change – complexity was the order of the day.

We developed a common framework for these innovations and a typology to enable everyone to understand the similarities and differences between them, and the expected mechanisms of action.

2. Questions about “how and why did it work” are just as critical as “how much did it work”

When thinking about how to evaluate a strategy that aims to increase the coverage of life-saving care, the first question that often comes to mind is – “does it work?”

All partners have – in the end – been just as interested in knowing how and why innovations have worked (understanding the mechanisms of change) as in seeing bar-charts and confidence intervals for the resulting change in coverage of life-saving care.

Despite this, in many contexts there is little capacity for rigorous qualitative research on these mechanisms.

3. Promote implementation at scale

There are critical actions that implementers need to take to catalyse the scale-up of their innovations into government health programmes, outlined in the infographic below.

4. Respond to emerging questions

What started out as the evaluation of implementation at scale evolved – through our interest in implementation strength – into implementation research on planning and decision-making at district level in low-income settings. In West Bengal, India, we’re piloting an approach to improve coordination of decision-making and planning for improved health outcomes.

5. Can evaluation lead to learning and better programmes?

Actionable data is at the heart of programme management and quality improvement – and high-quality data is also at the heart of evaluation. So why, then, has sequential evaluation been described by Don Berwick as “toxic to efforts of scale-up”? All too often, says Berwick, the interventions being assessed in independent evaluations are forced to remain static and unchanging. This can discourage implementers from developing a culture of learning.

The IDEAS satellite session at the Fourth Global Symposium on Health Systems Research falls within the conference theme of improvement and innovation in health services and systems. It will drill down into many of the issues I have touched on here. If anything I have mentioned seems familiar, then our session is a must. See you there!


Join IDEAS at the 4th Global Symposium on Health Systems Research!

‘Measurement, learning and evaluation for maternal and newborn health’
Tuesday November 15th
08:30 – 12:00 (PST)
Vancouver Convention Centre – Room 11

Chaired by Joanna Schellenberg (IDEAS) and John Grove (Bill & Melinda Gates Foundation)

Follow the hashtag #MLE4MNCH or #HSR2016 to join in the debate!
