There are several reasons why your contacts' Average Improvement along a particular Scale could be negative.
1. The Self-Perception Shift
What it is
When running courses designed to help people develop, it’s common to use surveys to track participants’ progress. However, these surveys can sometimes seem to suggest a decline in self-assessment over time, which you’ll see on Makerble as a negative Average Improvement if it happens to enough of your participants.
Here’s an example. You run a parenting course and you have a question that asks participants at the start of the course: "How good a parent are you?" The participants, thinking that they’re already good parents, provide answers such as, "Great!" But after a few weeks on the course, over which time those participants have learnt a lot more about what good parenting actually looks like, those same participants rate themselves lower. It’s not that they’ve become worse parents; it’s that their self-perception has shifted.
This phenomenon is often referred to as the Dunning-Kruger effect and reflects an increase in self-awareness. At the beginning of a course, people may not fully understand what “great” looks like. As they learn more, they may realise their initial assessment was overly optimistic. This shouldn’t be seen as a sign of failure; rather, this shift can indicate meaningful growth: participants are learning to evaluate themselves against higher, more accurate standards.
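To see how this shows up in the numbers, here is a minimal sketch in Python. It assumes, purely for illustration, that Average Improvement is the mean per-participant change between first and latest scores on a Scale, and the scores below are invented to mirror the parenting example above.

```python
def average_improvement(first_scores, latest_scores):
    """Illustrative assumption: Average Improvement = mean of (latest - first) per participant."""
    changes = [latest - first for first, latest in zip(first_scores, latest_scores)]
    return sum(changes) / len(changes)

# Five parents rate themselves on a 1-5 scale at the start of the course...
first_scores = [5, 5, 4, 5, 4]    # "Great!" (optimistic initial self-ratings)
# ...and again a few weeks in, once they know what good parenting looks like.
latest_scores = [3, 4, 4, 3, 4]   # better-informed, more critical self-ratings

print(average_improvement(first_scores, latest_scores))  # -1.0: a negative Average Improvement
```

No participant got worse; every change in the numbers comes from the shift in self-perception described above.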
What you can do about it
To avoid misinterpreting these results and better capture true development, consider these strategies:
Include Objective Measures
Combine self-reported data with objective metrics, where possible. For example, track changes in specific parenting behaviours, like communication frequency or positive reinforcement, alongside self-perception.
Acknowledge Changing Perspectives
Incorporate questions that recognise shifts in understanding, such as:
"How has your understanding of effective parenting changed since starting the course?"
This helps contextualise changes in self-assessment as part of a broader learning journey.
Use Retrospective Pre-Post Questions
Ask participants to reflect on where they were at the start of the course, e.g.:
"Looking back, how would you rate your parenting skills when you started?"
This approach highlights their progress by allowing them to reassess their starting point with their new perspective (see the worked example after this list).
Invite Reflection
Instead of a raw self-assessment, phrase questions to highlight growth. For instance:
"How confident are you in your parenting skills compared to when you started the course?"
This helps participants anchor their responses to progress rather than static self-judgment.
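Here is the retrospective pre-post idea as a worked example, again with invented numbers for a single hypothetical participant. The point is that the same end-of-course rating tells two very different stories depending on which baseline you subtract it from.

```python
# One participant's self-ratings on a 1-5 scale (invented for illustration).
naive_pre = 5           # start of course: "How good a parent are you?" -> "Great!"
post = 4                # end of course: a better-informed self-rating
retrospective_pre = 2   # end of course: "Looking back, how would you rate
                        # your parenting skills when you started?"

print(post - naive_pre)          # -1: looks like a decline
print(post - retrospective_pre)  # +2: the progress the participant actually made
```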
In summary
When participants lower their self-assessments, it’s not a sign that they’re getting worse. Instead, it often reflects growing insight into what “good” really looks like. By designing survey questions that account for this shift, you can more effectively capture the progress that truly matters—the development of self-awareness, skills, and confidence.
Accurate tracking ensures your data reflects the real impact of your programme, celebrating participants’ growth in ways that inspire and motivate.
2. The Mid-Programme Slump
If you’ve ever reviewed survey data during the middle of a programme and noticed a decline in responses compared to the initial assessment, you’re not alone. Particularly for questions about mental health and wellbeing, these dips can feel disheartening—but they’re not necessarily a sign that things are going wrong. Progress, especially in areas tied to personal growth or emotional wellbeing, is rarely linear.
Why it happens
Mid-programme dips are natural and often reflect the complexity of personal progress. Here’s why they might occur:
Emotional Vulnerability
Programmes that tackle deep issues often surface emotions or challenges that participants may not have previously acknowledged. For example, a mental health programme may bring attention to stressors or unhelpful habits that people had ignored. While this is part of growth, it can feel overwhelming and temporarily impact wellbeing scores.
The "Implementation Dip"
Borrowing a term from education, the "implementation dip" refers to the temporary challenges people face when applying new knowledge or skills. For example, adopting healthier coping mechanisms may feel awkward or frustrating at first, leading to perceived setbacks.
Temporary Life Events
Declines in wellbeing might also reflect external factors, such as unexpected stressors in participants' lives. These are unrelated to the programme but can impact responses.
Programme Fatigue
For some participants, the middle of a programme can feel like a slog—especially if they’ve invested effort but haven’t yet seen tangible results. This can lead to a temporary decline in motivation or optimism.
What you can do about it
To reassure yourself (and participants), here are ways to interpret and respond to these dips:
Normalise the Journey
Recognise that ups and downs are a normal part of personal growth. Share this with participants during the programme to help manage their expectations. Framing these dips as part of the process can reduce discouragement.
Supplement Surveys with Qualitative Data
Pair quantitative responses with open-ended questions like, "What challenges are you facing right now?" or "What are you learning about yourself so far?" This can provide richer insights into why responses are declining and whether it’s part of a broader growth pattern.
Track Long-Term Trends
Focus on overall trajectories rather than isolated data points. If scores dip mid-programme but improve by the end, the temporary decline isn’t necessarily a problem; it’s just part of the process (see the sketch after this list).
Provide Support During Dips
Use mid-programme surveys as an opportunity to check in with participants. For example, if mental health scores decline, it could signal a need for additional support, like a reminder of available resources or a session focused on self-care.
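As a sketch of the trend-over-snapshots idea, the hypothetical helper below (using invented scores on an assumed 1-10 wellbeing scale) flags a mid-programme dip as a concern only when the overall trajectory is also downward:

```python
def summarise_trajectory(scores):
    """Compare the mid-programme dip with the overall start-to-end change."""
    start, middle, end = scores
    mid_change = middle - start       # the dip people worry about
    overall_change = end - start      # the trajectory that matters
    if overall_change >= 0 and mid_change < 0:
        return "Mid-programme dip, but improving overall"
    return f"Overall change: {overall_change:+d}"

# Invented wellbeing scores (1-10) at the start, middle, and end of the programme.
print(summarise_trajectory([5, 3, 7]))  # dip at the midpoint, but a net gain of +2
```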
Should You Adjust Your Survey Design?
It’s worth considering whether your survey questions could better accommodate these natural fluctuations. Here are some options:
Ask About Change
Rather than asking about absolute states (e.g., "How happy are you right now?"), ask about changes relative to their baseline: "How has your wellbeing changed since the start of the programme?"
Incorporate Expectation Management
Add questions that acknowledge the journey, such as:
"What challenges are you currently facing as part of this process?" or "What progress are you most proud of so far?"Introduce Reflective Questions
Retrospective questions encourage participants to reflect on their growth:
"Looking back to the start of the programme, how have your perspectives or experiences changed?"Be Cautious About Over-Adjusting
While some tweaks can help, you don’t want to obscure the real data by artificially smoothing over dips. Instead, use communication and interpretation strategies to contextualise fluctuations rather than trying to eliminate them.
In summary
Progress isn’t linear, and that’s okay. Survey responses that show declines mid-programme don’t necessarily mean something’s gone wrong. Instead, they often reflect the ups and downs of real personal growth, emotional reflection, or external life circumstances. By normalising these dips, supporting participants through them, and thoughtfully designing surveys, you can ensure that your data (and your programme) accurately reflects the transformative journeys people are on.
Ultimately, growth isn’t about a straight upward line. It’s about where participants end up by the finish.