
Engaging with the Evidence Experts at the Evidence 2016 Conference

There is a wide gap in evidence-informed decision-making (EIDM) in the not-for-profit sector, government and other social development organisations. Many organisations focus on using data for accountability and do not take a systems-thinking approach to learning and adapting. Research and evidence support this, but few studies present practical approaches for narrowing the gap in Africa[1]. The Evidence 2016 Conference brought together a range of stakeholders to unpack this gap and present opportunities.

The first Evidence Conference, Evidence 2016, was presented by the Africa Evidence Network and held on 20-22 September 2016 in Pretoria, South Africa. It aimed to bring together research producers, translators and users to explore current EIDM landscapes and approaches to improving EIDM. This was captured in the concise theme Engage, Understand, Impact; a theme which resonates with the focus of Data Innovator: making data useful.

During the opening ceremony, Honourable Minister Naledi Pandor reminded us that “Africa is data rich, but analysis poor.” Presenters such as Thomas Scalway from Lushomo Communications demonstrated the value of data visualisation and communication in condensing complex evidence into products which are easier to engage with. His advice was to enlist the help of the ‘creatives’ when publishing an evaluation or research product. We at Data Innovator would also recommend options such as Piktochart, Microsoft Excel and Word, and Tableau for in-house production of simple communication products. Discussions around engaging the public with evidence generated much interest in the critical value of such an approach for ensuring evidence is contextualised, which in turn makes it easier for policy-makers and other users of evidence to understand.

The presentations of various landscapes depicted the dynamics at play between the different role players in EIDM. Data Innovator presented a landscape of EIDM among health NGOs in South Africa. The rapid assessment behind this landscape was an insightful exercise. Among the key informants surveyed, a common theme was the significant role donors play in shaping research agendas, and the competition among research producers and users for funding. It raises the question: are we researching the right things for the right problems? As Ian Goldman put it simply during the government panel discussion, often “Politics trumps Evidence”. Finding the right balance in EIDM is ultimately more helpful than trying to force-feed evidence. In the context of the Sustainable Development Goals, countries in Africa do need to strengthen the ties between evidence and its use for improved results.

The Evidence 2016 Conference was a great opportunity to network among EIDM stakeholders; future conferences could broaden representation through collaboration with other networks and greater engagement of members. Approaches such as the pods and the visual artists were creative touches to the conference which we at Data Innovator appreciated.


Leaders in EIDM to follow:

@Africa_evidence

@olfac

@Lushomo_Net

@iangoldmansa

@CLEARAA1

@3ieNews

[1] Guijt, I. (2012). Accountability and learning: Exploring the myth of incompatibility between accountability and learning. In: Fowler, A. and Ubels, J. (eds.) (2013). Capacity Development in Practice. Routledge. Chapter 21.

Putting your project indicators into perspective

“We have achieved 60% of our target (60 of 100 individuals trained) and are on track with our activities”

“We reached 1200 learners in 5 schools, exceeding our target of 1000 learners”

Statements like these, which explain programme progress in relation to a defined set of indicators, typically form an important part of progress reports submitted to donors.
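The arithmetic behind such statements is simple, and teams that already keep indicator data in spreadsheets or scripts can generate them automatically. Below is a minimal Python sketch; the indicator names, figures and status thresholds are illustrative, not taken from a real project:

```python
# Illustrative indicator records: what was targeted and what was achieved to date.
indicators = [
    {"name": "Individuals trained", "target": 100, "actual": 60},
    {"name": "Learners reached", "target": 1000, "actual": 1200},
]

for ind in indicators:
    progress = ind["actual"] / ind["target"] * 100
    # Thresholds here are arbitrary; a real project would define its own.
    status = "target exceeded" if progress > 100 else "on track" if progress >= 50 else "behind"
    print(f"{ind['name']}: {ind['actual']} of {ind['target']} ({progress:.0f}%) - {status}")
```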

According to UNAIDS:

“an indicator provides a sign or a signal that something exists or is true. It is used to show the presence or state of a situation or condition. In the context of monitoring and evaluation, an indicator is a quantitative metric that provides information to monitor performance, measure achievement and determine accountability.”

Indicators are particularly important for accountability purposes, but sometimes the work of monitoring and reporting on these indicators overshadows the greater usefulness of what is measured.

Below we present a case of an NGO which was able to change the way it reported its progress against donor-defined indicators to make them more useful to both the donor and the organisation itself.

NGO Africa* found that its narrow use of M&E data was neither useful to the organisation nor adequate for conveying its implementation story. The organisation was facing significant pressure from its donor to initiate and implement a new project aimed at supporting service delivery to key population groups. The project faced several implementation challenges and, although NGO Africa had put measures in place to resolve these, it was still struggling to reach its targets.

In reporting progress to its donor, it was critical for NGO Africa both to demonstrate the project’s value and to ‘buy’ enough time to overcome its implementation challenges. However, its existing reporting structure did not allow for this. The organisation was using the standard indicators required by its donor, presented against targets in a simple tabular format. This format did not demonstrate actual performance because it showed neither the dependency of activities on contextual factors nor the relationships between different indicators, or between indicators and outcomes.

This is how we helped NGO Africa put their indicators into perspective:

1. They reviewed their reporting template and identified the critical indicators

You may report on multiple indicators, but it is important to identify the key or strategic measures which inform project outcomes. These can either be standard donor indicators or additional outcome indicators which provide useful data not captured in the standard measures. In this project, indicators related to population service delivery were identified as most critical, while indicators such as the number of communication products developed were seen as less important in measuring and reporting on outcomes.

No. | Indicator | Y1 Target | Y1 Result
1 | Number of key populations reached with individual and/or small group level HIV preventive interventions that are based on evidence and/or meet the minimum standards required | 5,210 | 1,756 (1,629 FSW; 127 MSM)
2 | Number of individuals who received HTC services and received their test results | - | 275 (247 FSW, 28 MSM); HIV positive: 78
3 | Number of adults and children newly enrolled on antiretroviral therapy (ART) | 1,014 | 6
4 | Number of adults and children currently receiving ART | 142 | 7
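One lightweight way to operationalise this step (a sketch only; the "critical" flag and the communication-products figures are our own illustration, not NGO Africa's data) is to tag each indicator and filter the headline report accordingly:

```python
# Indicators carry a "critical" flag agreed between the programme and M&E teams.
# Targets and results for indicators 1-4 follow the table above; indicator 5 is illustrative.
indicators = [
    {"no": 1, "name": "Key populations reached with HIV preventive interventions",
     "target": 5210, "result": 1756, "critical": True},
    {"no": 2, "name": "Individuals who received HTC services and their test results",
     "target": None, "result": 275, "critical": True},
    {"no": 3, "name": "Adults and children newly enrolled on ART",
     "target": 1014, "result": 6, "critical": True},
    {"no": 4, "name": "Adults and children currently receiving ART",
     "target": 142, "result": 7, "critical": True},
    {"no": 5, "name": "Communication products developed",
     "target": 12, "result": 10, "critical": False},  # illustrative figures
]

# The headline report covers only the critical, outcome-linked measures.
for ind in (i for i in indicators if i["critical"]):
    target = ind["target"] if ind["target"] is not None else "n/a"
    print(f"{ind['no']}. {ind['name']}: {ind['result']} of {target}")
```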

2. They connected, visualised and explained their indicators

Context is critical when reporting results to donors. Instead of standing alone, indicators should depict a chain of events, and the relationships between them should be clearly presented. The list format used by NGO Africa was not achieving this.

Based on the project’s Theory of Change and input from relevant implementation staff, project activities were organised into a sequence of connected events with indicators or key results measuring achievement at each point in the sequence. The use of diagrams like the one below allowed them to clearly present the sequence and depict the scale of results. Overlaying narrative and key points to clarify results helped to provide a bigger picture of implementation performance.

[Figure: diagram connecting the project’s indicators into a sequence, with targets and results at each step]
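For teams without a designer on hand, a chart along these lines can be drafted in a few lines of Python (a sketch assuming matplotlib; the layout is ours, and the testing-step target is left out because it is not shown in the table above):

```python
import matplotlib.pyplot as plt

# Steps ordered by the project's Theory of Change, from outreach to treatment.
steps = ["Reached with prevention", "Tested (HTC)", "Newly enrolled on ART", "Currently on ART"]
targets = [5210, 0, 1014, 142]   # 0 stands in for the testing target, which is not shown
results = [1756, 275, 6, 7]

fig, ax = plt.subplots(figsize=(8, 4))
y = range(len(steps))
ax.barh(y, targets, color="lightgrey", label="Y1 target")
ax.barh(y, results, height=0.5, color="steelblue", label="Y1 result")
ax.set_yticks(list(y))
ax.set_yticklabels(steps)
ax.invert_yaxis()                # first step of the chain at the top
ax.set_title("Key population service cascade, Year 1")
ax.legend()
plt.tight_layout()
plt.savefig("indicator_cascade.png", dpi=150)
```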

Have a look at our previous blog on thinking outside the data box to identify the key questions to ask when reviewing your indicators.

3. They used design principles in presenting their results

While a simple diagram in MS Word can be enough to communicate your results, additional graphic design can really improve how you communicate your implementation story. In this case, NGO Africa was able to shift focus away from its underperformance and highlight important contextual details that explained its implementation challenges.

[Figure: designed version of the results diagram, with contextual annotations explaining implementation challenges]
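Full graphic design is best left to dedicated tools, but even in code a simple annotation can move the reader's eye from the shortfall to its explanation. Extending the earlier sketch (the callout text is hypothetical, not from NGO Africa's actual report):

```python
import matplotlib.pyplot as plt

# One critical indicator, with the context behind the number made part of the visual.
fig, ax = plt.subplots(figsize=(8, 2.5))
ax.barh([0], [5210], color="lightgrey", label="Y1 target")
ax.barh([0], [1756], height=0.5, color="steelblue", label="Y1 result")
ax.set_yticks([0])
ax.set_yticklabels(["Key populations reached"])
ax.legend(loc="lower right")

# The explanation below is a hypothetical example of a contextual note.
ax.annotate("Outreach started four months late due to\napproval delays; monthly reach has since\nbeen on track",
            xy=(1756, 0), xytext=(2400, 0.15),
            arrowprops={"arrowstyle": "->"}, fontsize=9)
plt.tight_layout()
plt.savefig("indicator_context.png", dpi=150)
```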

Although NGO Africa was not able to achieve its targets by the end of the project, it was able to change the way it reported on indicators and conveyed its implementation story. By putting its indicators into context, NGO Africa helped the donor understand the challenges it faced in achieving results. It was also able to demonstrate its plans for addressing these contextual challenges, showing that the organisation was engaged in organisational learning and development.

* This is not the NGO’s real name; it has been changed for confidentiality.

Thinking outside the data box

Organisational learning is a process of detecting and correcting error, and is a component of adaptive capacity and organisational effectiveness (Letts et al., 1999; Argyris, 1977). This process is also known as adaptive coping, which includes the “ability to detect complex patterns within events, identify causal relations between events, and intervene in events” (Van Tonder and Roodt, 2008).

Performance monitoring facilitates organisational learning and contributes to broader evaluative thinking (Taylor, 1998). Organisations with strong performance monitoring systems and structures, and with teams that are able to interact with and learn from this data, are able to achieve improved performance, efficiency, adaptability and sustainability. However, many organisations do not use monitoring for this purpose. In NGOs, we and others (Guijt, 2012) have found that the monitoring process is used more for accountability purposes than as an opportunity for reflection and learning.

For organisations to learn from M&E, learning has to occur at both the individual and collective level and be fed back into the system. This requires systems thinking, and staff should be equipped with the necessary information, systems and processes to enable them to continuously question and review organisational processes. This process of questioning the underlying assumptions and objectives of a programme is known as double-loop learning.

The experience of ABC, an organisation working to improve literacy levels amongst primary school learners, demonstrates the value of critical thinking in organisational learning.

ABC identified poor literacy teaching methods as a key reason for low levels of literacy amongst primary school learners. To address this problem, they ran a training and mentoring programme for teachers focused on improving their literacy teaching skills. However, regular reading and writing tests showed very little improvement in learner literacy. Thinking critically about their monitoring data, ABC project staff realised that learner assessments were not a direct indicator of the efficacy of their intervention. Through research they found that external factors, such as a lack of school readiness amongst pupils and poor homework support from parents, acted as significant barriers to learning. Learning from these findings, ABC developed more relevant indicators to monitor the success of teacher training and also expanded their intervention to include pre-school teachers and parents.

However, many social development organisations we have worked with have pointed to the gap they face in thinking critically about data and using it in decision-making. Without working on how your staff think about data, efforts to set up strong systems are futile.

Here is an exercise we find helps to think outside the data box.

Reflecting on just one indicator, answer the following questions:

  • What does the indicator mean? What are we measuring, and why?
  • How does it fit our context?
  • What are the target and actual values over time?
  • How can this be explained?
  • What evidence supports this?
  • What story does it tell us?
  • How does this relate to our Theory of Change?
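Teams that want to make this exercise routine could capture the answers alongside the data itself. Here is a minimal sketch; the structure and the reflect function are our own illustration, not a standard M&E tool:

```python
# The reflection questions from the exercise above, keyed for reuse.
QUESTIONS = [
    "What does the indicator mean? What are we measuring, and why?",
    "How does it fit our context?",
    "What are the target and actual values over time?",
    "How can this be explained?",
    "What evidence supports this?",
    "What story does it tell us?",
    "How does this relate to our Theory of Change?",
]

def reflect(indicator_name: str) -> dict:
    """Walk a team member through the exercise for one indicator and record the answers."""
    answers = {q: input(f"{indicator_name}\n{q}\n> ") for q in QUESTIONS}
    return {"indicator": indicator_name, "reflection": answers}

# Example (interactive): notes = reflect("Number of learners reached")
```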

Providing individuals and teams with the opportunity to critique data in this way encourages a culture of learning within an organisation. Data Innovator provides training to M&E and other project teams on thinking critically about data. We recently ran one such training for an NGO using real-life scenarios and data. Giving staff the opportunity to step away from the usual M&E process and simply think and talk about the data allowed them to connect the data with reality. The result was robust discussion between the M&E and programmatic teams about what the data meant and how they should act on it.


Look out for our next blog, where we share an example of how one organisation used real data to better communicate the programmatic challenges it faced and the solutions it put in place to address them.


References:

Argyris, C. (1977). Double-loop learning in organizations. Harvard Business Review, September 1977.

Guijt, I. (2012). Accountability and learning: Exploring the myth of incompatibility between accountability and learning. In: Fowler, A. and Ubels, J. (eds.) (2013). Capacity Development in Practice. Routledge. Chapter 21.

Letts, C.W., Ryan, W.P., and Grossman, A. (1999). High performance non-profit organizations: Managing upstream for greater impact. John Wiley and Sons, Inc. New York.

Taylor, J. (1998). NGOs as learning organizations. Community Development Resource Association. 

van Tonder, C.L. and Roodt, G. (eds.) (2008). Organization development: Theory and practice. Van Schaik, Pretoria.