Beyond scores: qualitative evaluation tips for training providers

It’s easy to default to smile sheets, satisfaction ratings and Net Promoter Scores when measuring training success. They’re quick, clean, and give you a number to stick on a report. But what do they actually tell you?
If you’re serious about improving training quality, proving ROI, or just understanding what learners really get from your sessions, you need to dig deeper than the numbers. That’s where qualitative evaluation comes in.
Scores still have their place, but they only tell part of the story. Here’s how to add real depth to your evaluations without piling on extra admin.
In this article, we’ll cover:
- What is qualitative data in training evaluation?
- Why does qualitative evaluation matter?
- Open questions
- Post-course interviews and voice notes
- Group feedback sessions
- Observing training sessions
- Analysing qualitative data
- Feeding it back into your training design and delivery
What is qualitative data in training evaluation?
Qualitative data is any non-numerical feedback that helps you understand how people experience your training, why they respond the way they do, and what the impact feels like in their own words.
Where quantitative data answers “how many?”, qualitative data answers:
- “Why did that matter?”
- “What changed as a result?”
- “How could we make this better?”
It often comes in the form of:
- Open-text survey responses
- Interview or focus group transcripts
- Observations from training sessions
- Learner journals or post-course reflections
- Voice notes or emails
- Trainer notes and debrief comments
For example:
| Quantitative | Qualitative |
| --- | --- |
| 92% of learners said the course met their expectations | “The course was great, but I was hoping for more real-world examples I could apply to my job straight away.” |
| Trainer rating: 4.7/5 | “The trainer was knowledgeable, but sometimes rushed through key points. I wanted more time for questions.” |
Used together, these two data types give you a fuller, more accurate picture of what your training is achieving and how to make it even better.
Why qualitative evaluation matters
Most training providers lean heavily on quantitative data, and it’s easy to see why. It’s quick, measurable, and gives you numbers that are easy to track over time. Attendance rates, completion stats, satisfaction scores, Net Promoter Scores (NPS): they all tell you something.
But on their own, they might not tell you enough.
A course might score 4.8/5 consistently, but that doesn’t mean learners are applying what they’ve learned. It doesn’t tell you which parts resonated, what needs work, or how it felt for different types of learners. And if scores start to dip, numbers alone don’t tell you why.
That’s where qualitative evaluation comes in.
Qualitative data gives you:
- Context – You don’t just know that someone was dissatisfied; you understand the reason behind it
- Depth – You get detailed insight into learner experiences, barriers, and outcomes
- Emotion – Learners often reveal how a session made them feel, what built their confidence, or what missed the mark
- Direction – While a number might tell you something’s off, qualitative comments help you pinpoint what to fix or what to replicate
For example:
“The course was well-structured, but I really struggled to stay focused in the afternoon sessions. More breaks or shorter modules would’ve helped.”
That kind of insight doesn’t come from a 1–5 scale.
It also makes your impact reporting more meaningful. When you’re presenting to stakeholders, combining stats with learner quotes and stories brings your data to life. It shows how your training is making a difference, not just how it performed on paper.
In short: if you’re only using numbers, you’re missing half the picture. Qualitative evaluation fills the gaps, challenges assumptions, and gives you the detail you need to keep improving your offer.
1. Ask open questions (but keep them focused)
Open questions are the foundation of meaningful qualitative evaluation. They invite learners to express opinions, describe experiences, and reflect in their own words. But there’s a balance to strike—if your questions are too broad, you’ll either get vague responses or nothing at all.
The key is to keep your questions open-ended and purposeful.
Why it matters:
Generic prompts like “Any other feedback?” rarely give you anything useful. Learners are busy, they’ve just finished a session, and if your question doesn’t prompt specific reflection, they’ll often skip it—or drop in something surface-level like “It was fine”.
Instead, aim for questions that:
- Nudge learners to reflect on a specific part of the experience
- Make it easy to identify what worked and what didn’t
- Help you spot patterns across responses
- Are short, clear and not overloaded with jargon
Good open questions to use:
Here’s a list you can rotate through across sessions and course types:
- What was the most useful part of this training for you, and why?
- What could we improve in the delivery or content?
- Was there anything unexpected that stood out to you?
- What’s one thing you learned that you’ll apply in your day-to-day role?
- How confident do you feel applying what you’ve learned, and what would help you feel more confident?
- Was anything missing that you were expecting?
- Did the pace and format suit your learning style? If not, what would have helped?
Each one gives you a specific lens through which to analyse the feedback, and shows learners that their input is taken seriously.
What to avoid:
- Double-barrelled questions: e.g. “What did you enjoy, and how could we improve it?” — learners will usually only answer one part
- Overly complex or jargon-heavy questions: e.g. “How effective was the pedagogical approach in relation to your learning outcomes?”
- Vague prompts: e.g. “Any thoughts?” — it’s too open and gives no direction
Quick tip: use 1–2 open questions per survey
Don’t overload your learners with six open questions at the end of every course. You’ll dilute the quality of responses and reduce completion rates.
Instead:
- Pick one or two focused, high-value questions per evaluation
- Rotate different questions for different sessions or over time
- Use conditional logic in digital surveys (e.g. “If the learner gave a low score, prompt for a reason”)
If you're using a TMS like accessplanit, you can build this logic into your post-course workflows so the right questions go to the right people at the right time, automatically.
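If you build your own digital surveys, that kind of conditional logic is simple to express in code. Here’s a minimal sketch in Python; the question wording, score scale, and thresholds are all hypothetical, not taken from any particular survey tool or TMS:

```python
# Illustrative only: pick one focused follow-up question based on a
# learner's 1-5 satisfaction score. Thresholds and wording are examples.

def follow_up_question(score: int) -> str:
    """Return a single open question matched to the learner's score."""
    if score <= 2:
        # Low scores: ask for the reason so you can pinpoint the problem
        return "What was the main thing that didn't work for you?"
    if score <= 4:
        # Middling scores: ask what would have lifted the experience
        return "What one change would have made this course more valuable?"
    # High scores: ask what to keep or replicate
    return "What was the most useful part of this training for you, and why?"

for score in (1, 3, 5):
    print(score, "->", follow_up_question(score))
```

The point isn’t the code itself; it’s that one well-targeted follow-up per learner beats six generic open boxes for everyone.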
2. Use post-course interviews or voice notes
Written feedback has its place, but sometimes it’s limited. Not everyone expresses themselves well in writing, and learners might hold back nuance, tone or emotion that’s easier to capture in conversation.
That’s where short interviews or voice notes come in. They give you a richer, more human perspective on how your training landed, especially for high-stakes or bespoke courses.
Why it works:
- Learners can expand on points they wouldn’t take time to write
- You pick up on tone, hesitation, enthusiasm: things you’d miss in text
- It builds rapport, which can lead to more honest and reflective feedback
- It’s particularly helpful for identifying organisational impact and longer-term behaviour change
How to keep it manageable:
You don’t need to interview every learner. This approach works best when you:
- Select a small, representative sample (e.g. a few people per cohort, or a few per quarter)
- Focus on high-value courses where deeper insight is needed (e.g. leadership training, customer-facing skills, onboarding programmes)
- Schedule 10–15 minute calls max, or invite learners to send a 1–2 minute voice note when convenient
- Use the same 4–5 core questions so you can compare insights across sessions
Suggested interview questions:
- What stood out to you most in the training, and why?
- Was there anything that didn’t quite land for you?
- Can you give an example of something you’ve done differently since the session?
- Did the format or delivery style suit how you like to learn?
- How has this course helped you in your role so far?
If you're short on time or resources, you can also ask trainers to record a short summary of informal conversations they’ve had with learners. This still counts as qualitative insight, especially if it's recurring feedback.
Top tip: Don’t over-script it
Interviews should feel like a conversation, not a questionnaire. Use open questions as prompts, then follow where the learner takes you. Some of the most useful insights come from unexpected directions.
Tools to make it easier:
- Calendly or MS Bookings for scheduling
- Zoom or MS Teams for quick video calls
- WhatsApp or voice memo apps if you’re collecting informal feedback on the go
- Otter.ai or similar for transcripts and analysis
If you're using a training management system like accessplanit, you can even schedule these check-ins as automated follow-ups based on course type or client, keeping everything streamlined.
Interviews and voice notes won’t replace your standard feedback process, but they’ll give you a deeper layer of insight when you need it. Use them strategically, and they’ll pay off in clearer improvements, stronger case studies, and better learner experiences.
3. Group feedback sessions or focus groups
Sometimes, the most useful insights don’t come from forms or one-to-one chats, but from hearing learners bounce off each other. Group feedback sessions (or focus groups) are a powerful way to dig into the collective experience of a training programme and uncover patterns you might otherwise miss.
They work especially well when you want to understand:
- How a cohort experienced the training as a group
- What stuck, what didn’t, and why
- How the session played out in context (e.g. team dynamics, work culture, real-world application)
- Reactions to new formats, content changes, or pilot sessions
Why group feedback works:
- Learners often build on each other’s thoughts, triggering more detailed responses
- You get a mix of perspectives in a short space of time
- It helps surface feedback from quieter participants who may not engage in written forms
- You can explore topics in more depth and follow up in real time
When to use them:
- After delivering a pilot or new programme
- When feedback is polarised and you want to understand the nuance
- To support larger-scale evaluation projects or course reviews
- As part of client account reviews or stakeholder feedback processes
How to run an effective group session:
- Keep groups small (ideally 4–6 people) to give everyone space to speak
- Keep it short and focused (30–45 minutes max)
- Use a neutral facilitator (ideally not the course trainer) to avoid bias or defensiveness
- Create a safe, open environment where feedback is welcomed without judgment
- Make it clear this isn’t a test or a performance review. It’s about improving the learning experience!
Suggested prompts:
- What worked well for you in this course?
- What could have made the experience more valuable or relevant?
- Were there moments that felt unclear, rushed, or unnecessary?
- How well did the format suit the way you learn or work?
- Have you applied anything you learned? What’s happened as a result?
You can run these sessions in person or virtually. Video calls work perfectly well for this, as long as the tech runs smoothly and the group is small enough to stay focused.
What to watch out for:
- Dominant voices: Encourage balanced input to avoid one or two learners steering the discussion. Direct quieter participants with specific questions if needed.
- Groupthink: Watch for agreement without challenge; people often go along with others in a group. Make space for dissenting views by explicitly inviting different perspectives.
- Time creep: It’s easy to let these run over. Keep to a clear agenda and move things on if a topic is going in circles.
Capturing and using the insights:
- Take notes. Or better, record the session (with permission)
- Transcribe and theme the responses (what came up most often, what was surprising, what was actionable?)
- Feed the findings into course reviews, trainer development, or client reporting
If you're delivering contracted or corporate training, group sessions can also double as relationship-building tools. They show learners (and their employers) that you're invested in making the experience work, not just delivering content.
Group feedback isn’t a fit for every session, but used strategically, it adds a layer of collective insight that written surveys can’t replicate. It’s one of the quickest ways to uncover hidden friction points, refine delivery, and find opportunities to elevate the learner experience.
4. Observe training delivery first-hand
When it comes to understanding what’s really happening in your sessions, nothing beats being in the room. Observing live delivery gives you insights that feedback forms can’t touch: how learners are responding in real time, whether the session flows, where engagement drops, and how different elements land across learning styles.
Done well, observation helps you spot:
- Gaps between what’s planned and what’s delivered
- Where learners disengage or struggle to keep up
- Missed opportunities for interaction or clarification
- The subtle things that make sessions feel engaging (or not)
- How inclusive and adaptable the trainer is in practice
It’s also a useful way to validate or investigate patterns in feedback. If several learners say a course felt rushed or confusing, observing it live helps you see why.
Tips for effective observation:
- Be clear about your role – You’re there to support quality and improve outcomes, not to critique performance for the sake of it. Let the trainer and learners know in advance that you’ll be observing.
- Use a standard observation template – This helps you stay focused and ensures consistency across different sessions or trainers.
- Don’t just watch the trainer – Watch how learners respond: body language, participation, note-taking, questions, energy levels.
- Blend qualitative with quantitative – Make note of specific moments, quotes or behaviours. For example: “At 10:17, the group disengaged during the technical walkthrough; three participants switched to checking their phones.”
- Follow up with the trainer – Share your observations constructively. This isn’t about fault-finding, it’s about building awareness and giving the trainer valuable external perspective.
What to look for:
- Are instructions clear and accessible to all learning styles?
- Is the trainer checking for understanding, not just delivering content?
- Are visuals, activities, or discussions balanced? Does the session rely too heavily on one format?
- How well is time being managed?
- Are quieter learners being drawn into the conversation?
If you're observing online delivery:
- Join with camera off and mic muted to reduce disruption
- Keep an eye on chat activity, reactions, and use of interactive tools
- Look out for long silent stretches or unclear transitions
Observation doesn’t have to be formal
It can be as simple as sitting in on a virtual or in-person session once a quarter and jotting down reflections. You can also invite peer observations between trainers, or even ask a senior learner to provide their own notes as a form of peer insight.
Some organisations also include short post-session debriefs between observer and trainer—10 minutes to share impressions, discuss anything that felt off, and highlight areas of strength.
Used carefully and constructively, observation brings depth and immediacy to your evaluation strategy. It shows you not just what learners say about a session, but what actually happens when the content hits the room.
5. Analyse qualitative data properly (without getting lost in it)
One of the biggest reasons qualitative evaluation gets pushed aside is because people think it’s too messy or time-consuming to analyse. You’ve got 200 comments, a stack of voice notes, and maybe a transcript or two. Where do you even start?
The good news: you don’t need to treat it like an academic research project. With a bit of structure, you can turn all that rich, open feedback into clear, actionable insight.
Step 1: Group similar responses into themes
Start by scanning your data for common threads. These could relate to:
- Content – e.g. too much jargon, not enough real-world examples
- Delivery – e.g. trainer pace, engagement levels, tech issues
- Structure – e.g. session length, time for questions, clarity of objectives
- Application – e.g. learners unsure how to use the knowledge
- Environment – e.g. distractions, group dynamics, accessibility issues
You don’t need fancy tools to do this. A shared doc, spreadsheet, or even colour-coded sticky notes will do the job.
Tip: If you're dealing with lots of data, work in batches. Focus on 20–30 pieces of feedback at a time rather than trying to do everything in one go.
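If your feedback lives in a spreadsheet export, even a few lines of Python can speed up a first theming pass. This is a hypothetical sketch assuming simple keyword matching; the theme names and keywords are illustrative, and a human should always review what the matching misses:

```python
# Illustrative first-pass theming: group free-text comments under any
# theme whose keywords they mention. Themes and keywords are examples.
from collections import defaultdict

THEMES = {
    "Content":     ["jargon", "example", "relevant", "theory"],
    "Delivery":    ["pace", "rushed", "trainer", "engaging"],
    "Structure":   ["length", "break", "module", "objective"],
    "Application": ["apply", "job", "role", "practice"],
}

def theme_comments(comments):
    """Group comments by matched theme; unmatched ones go to 'Unthemed'."""
    grouped = defaultdict(list)
    for comment in comments:
        text = comment.lower()
        matched = [t for t, kws in THEMES.items() if any(k in text for k in kws)]
        for theme in matched or ["Unthemed"]:
            grouped[theme].append(comment)
    return grouped

feedback = [
    "Too much jargon in the first module.",
    "The trainer rushed the afternoon session.",
    "Loved it, but I'm not sure how to apply this in my role.",
]
for theme, items in theme_comments(feedback).items():
    print(f"{theme} ({len(items)}): {items}")
```

Treat the output as a starting point for your manual read-through, not a replacement for it; the “Unthemed” bucket is often where the most interesting feedback hides.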
Step 2: Identify standout quotes or examples
Numbers are great, but stories stick.
Pull out comments that illustrate key points vividly, whether it’s praise for a particular trainer or frustration about course relevance. These can be used in internal reviews, reports to stakeholders, or even case studies.
Examples:
“The trainer was clearly knowledgeable, but I struggled to relate the material to my job – more examples would’ve helped.”
“I loved the interactive parts. The breakout discussions were the only bit that really kept me focused.”
Look for comments that are emotionally charged, unusually detailed, or repeated across multiple learners; they often point to your biggest wins or pain points.
Step 3: Cross-reference with your quantitative data
Use qualitative feedback to explain or challenge your numbers.
- If a session scored highly, do the comments support that or reveal hidden issues?
- If scores dropped, what’s the root cause? Are there consistent complaints about timing, clarity, or delivery?
- Did a particular cohort respond differently, and if so, why?
You don’t want to cherry-pick data to suit a narrative, but you do want to dig into anything that seems surprising or unclear.
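As a concrete example, a short script can flag sessions where the scores and the comments seem to disagree, so you know where to dig. This is a hypothetical sketch with made-up data, keywords, and thresholds, purely to show the idea:

```python
# Illustrative only: surface sessions with high average scores but
# repeated negative comments. All data and thresholds are hypothetical.

sessions = [
    {"name": "Leadership 101", "scores": [5, 5, 4],
     "comments": ["Great pace", "Loved the case studies"]},
    {"name": "Excel Basics", "scores": [4, 5, 5],
     "comments": ["Felt rushed at the end", "Too rushed", "No time for questions"]},
]

NEGATIVE_WORDS = ("rushed", "confusing", "unclear", "too long", "no time")

for s in sessions:
    avg = sum(s["scores"]) / len(s["scores"])
    negatives = [c for c in s["comments"]
                 if any(w in c.lower() for w in NEGATIVE_WORDS)]
    # A high average alongside repeated complaints is worth a closer look
    if avg >= 4.5 and len(negatives) >= 2:
        print(f"{s['name']}: avg {avg:.1f} but "
              f"{len(negatives)} negative comments -> investigate")
```

Here “Excel Basics” would be flagged: a 4.7 average hides three separate complaints about pacing, which is exactly the kind of mismatch a number alone won’t show you.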
Step 4: Turn insight into action
Don’t let the feedback sit in a spreadsheet.
Once you’ve identified key themes, make decisions:
- What needs to change right away?
- What should be monitored or reviewed again later?
- What’s working well and should be kept or scaled?
Example:
If several learners mention that the second half of a course feels rushed, that’s a red flag to revisit your timings or breakpoints. If people consistently praise hands-on activities, look for ways to build more of them into other sessions.
You can also use this data to inform:
- Trainer development
- Course updates
- Sales conversations with clients
- Internal CPD planning
Tools to help (if you're ready for them):
- Excel/Google Sheets – great for simple theming and filtering
- Notion or Trello – good for visual grouping and collaboration
- Thematic analysis tools – like Dovetail or NVivo, if you’re handling a lot of qualitative feedback regularly
- Training Management Systems – like accessplanit, where feedback can be centralised and connected directly to your courses and trainers
Analysing qualitative feedback doesn’t need to be a data minefield. With a consistent process and a focus on themes, you can turn comments into clear insights that drive smarter decisions, stronger delivery, and better learning outcomes without drowning in detail.
6. Feed it back into your design and delivery
Collecting qualitative feedback is only useful if it drives change. Too often, feedback is gathered, summarised, and filed away without ever making it back to the people designing and delivering the learning. That’s where the value gets lost.
To really benefit from qualitative evaluation, you need to close the loop. That means feeding what you learn directly back into course design, trainer development, and the wider learner experience.
Use themes to shape course updates
If you’re consistently hearing that learners want more real-world application or struggle with a particular module, that’s a sign to review your content structure or examples. Equally, if learners repeatedly praise a particular format or session flow, consider replicating that approach elsewhere.
For example:
- Feedback about "information overload" might trigger you to space sessions out or break long sections into more manageable chunks.
- Positive comments about collaborative tasks could lead you to add more group work into your blended delivery strategy.
- Confusion around outcomes might prompt you to rewrite your course objectives or reinforce them during delivery.
Share insight with your trainers
Trainers are often the last to see post-course feedback, or they only get a filtered version of it. Make a habit of sharing anonymised qualitative insight with them directly, not just their scores.
This helps them:
- Reflect on what’s working and what’s not
- Make minor tweaks to how they deliver content in real time
- Feel more connected to learner needs and preferences
- Grow their own confidence and professional practice
You could also build this into your trainer development process, e.g. include a short feedback review as part of regular 1:1s, peer reviews or team debriefs.
Involve your design team or subject matter experts
Your content creators, learning designers or subject experts will benefit from hearing real learner voices too. Sharing themed comments, direct quotes, or trends gives them a much clearer picture of how their material is being received and applied in real-world contexts.
You don’t need to do this for every course. Just focus on high-priority ones, new programmes, or anything that’s flagged consistently in feedback.
Communicate changes to your audience
When you make updates based on feedback, let your learners (and clients) know. This builds trust and shows you’re listening.
For example:
“You told us the practical sessions were the most valuable part of the course — so we’ve added more case studies and reduced the theory blocks.”
This simple act of communicating improvements based on qualitative insight strengthens your reputation, drives rebookings, and positions you as a provider who genuinely cares about outcomes.
Use findings to inform future planning
Finally, your qualitative insights can (and should) inform your strategic decisions: which programmes to expand, which audiences to target, which formats are most effective, and where to invest in new tech or trainer capacity.
Final thoughts
Qualitative feedback is more than just “nice to have” comments; it’s where the real insight lives. It’s what turns good training into great training. And it’s often the difference between a course that gets repeated... and one that gets results.
The takeaway? Insight is only useful if it changes something. Build a clear process for reviewing, sharing, and acting on qualitative data, and it’ll become one of the most powerful tools in your toolkit for improving delivery, delighting learners, and driving lasting impact.
Strong qualitative feedback doesn’t come from asking more questions. It comes from asking better ones. Keep them focused, purposeful, and learner-centred, and you’ll start to see insights that actually drive improvement.
If you want your training to drive real change and not just tick a box, look beyond the scores.
Want to stay up to date with what we're up to? Subscribe to our blog or follow us on Instagram!