Makerble Evaluations

Evaluating a programme from the outset

We begin evaluations by finding out the purpose of the research. Typically this falls into one or more of these categories:

  • Influencing policy: we identify what needs to be demonstrated in order to convince policy-making audiences, and the data we need to collect to serve as reliable evidence

  • Continuous learning: we identify outcomes which indicate the extent to which the programme is working and which provide insight into where improvements might need to be made once the programme is underway

  • Evaluation reports: we create a research plan that enables us to create a baseline report, interim report and end-of-programme report which demonstrate the difference made and the reasons driving the impact

Outcome Discovery is a process we use to determine the expected and unexpected outcomes that your programmes achieve. This involves looking internally, by engaging with staff and volunteers, as well as externally, by listening to beneficiaries to understand the impact from their perspective. We look for existing evidence of the long-term outcomes of your work and for standardised outcome frameworks already used within your sector, e.g. the Outcome Frameworks and Shared Measures database created by The National Lottery Community Fund.


The Process Of Change is one of the tools we use to make sense of the complex set of outcomes that often arise during the Outcome Discovery phase. By grouping outcomes into one of three types of change, we can accelerate agreement around the language we use to describe the difference your programmes make. We group outcomes into:

  • Changes in how people Think & Feel: i.e. outcomes related to knowledge, attitudes, beliefs and internal capacity

  • Changes in what people Do: i.e. outcomes related to behaviour, habits, achievements and corporate or government policies

  • Changes in what people Have: i.e. outcomes related to wellbeing, quality of relationships, health and wealth


Data Collection: With the set of outcomes confirmed, we drill down to a practical set of indicators which can be woven into existing data collection processes wherever possible and supplemented by additional data collection when needed.

This culminates in the creation of the baseline, interim and final evaluation reports required.

Continuous Learning: In addition to the summative evaluation, we use workshops and digital tools to make the evaluation formative so that your delivery teams have the opportunity to reflect on the learnings surfaced by the evaluation at regular intervals.


To find out more about our approach to evaluation including Makerble Learning Boards, contact Matt Kepple who leads our evaluation practice. Email: [email protected]. Phone: +44 (0) 7950 421 815.

Evaluating a Collective Impact programme

Collective Impact programmes present unique challenges for evaluation because of the often complex arrangement of partners, each producing data that could inform an evaluation. To address this, we take a collaborative approach to planning the impact methodology which intentionally draws upon the insight and expertise of the respective partners. Our approach is anchored to the Five Pillars of Collective Impact. The fifth pillar, a strong backbone organisation, is typically the ‘lead’ organisation within the partnership which commissions Makerble as the evaluation partner in the first place.


Setting a common agenda is an essential first step to evaluating a Collective Impact programme. We use our proprietary Process Of Change™ framework in workshops with partners to enable everyone to agree on shared language that describes the outcomes being achieved at each stage of a person’s journey through your programme. This typically involves mapping out the ‘customer journey’ that a service user, participant or beneficiary might take through your programme.


Agreeing to Shared Measurement requires partners to distil a pragmatic set of indicators which will be reported across the programme. These shared measures open the door to continuous improvement: partners can see their collective results and have open dialogue about the factors which enable or inhibit people’s progress through the programme. Part of the role Makerble plays when evaluating Collective Impact programmes is providing the dashboards which enable shared measurement to happen and facilitating the conversations which enable continuous reflection, learning and iteration to take place.


To find out more about how we can conduct the evaluation of your Collective Impact programme, contact Matt Kepple via email on [email protected] or over the phone on +44 (0) 7950 421 815.

To find out more about the tools and techniques we use, visit our Building Blocks of Measurement.

Collective Impact Client Stories:

Evaluating a funder's impact

Most funders approach impact reporting by communicating these three things:

  • Case studies about grantees

  • Amounts spent, split by cause and/or location

  • Number of grants made, split by cause and/or by location

Whilst this is a good place to start, it only tells part of your story. We help funders go further by measuring the difference that your grants have made.

We recognise that as a funder you have three levels of impact:

  • Aggregate impact: the combined outcomes of the projects you make grants to

  • Grants Plus impact: the difference that your support makes to your grantees’ projects beyond the financial benefit, for example, increased profile or improved beneficiary recruitment

  • Strategic impact: the extent to which you are achieving your mission as defined in your Theory of Change

At each level of impact there is the opportunity to report on the outputs and outcomes of your work. That said, decisions about impact measurement need to be mindful of the capacity of grantees so as not to burden them with reporting requirements which are disproportionate to the funding they receive. 

At Makerble we use our Impact Measurement Methodology to identify the minimal set of metrics you need to measure and devise practical ways to measure them.


We can work with you across the full gamut of impact measurement, or we can focus on the one or two areas which are most important to you. Typically our work splits into three phases.

  1. Outcomes Framework: identifying the outcomes and metrics you need to measure

  2. Data Collection: creating a data collection plan and either implementing it ourselves or supporting you to systemise it

  3. Reporting: analysing your data, producing reports and giving you access to interactive dashboards that show you your data in real-time so you can use it to enhance decision-making

Contact us to talk through your current approach to impact measurement and explore how you can take your foundation’s impact measurement to the next level.
