As the year draws to a close, we’ve also reached the closing stage in our Design Thinking series. We’ve already walked in our users’ shoes, defined the problem, explored creative solutions and prototyped a promising idea. You can find blog links on these areas at the end of this blog.
And now comes the final stage and the moment of truth - Testing.
The prototypes created in the previous stage can inform us as to what might work and what might not. Testing is where we try to produce evidence to understand how our prototypes land. We learn what works, not based on theory, but by gathering evidence first hand from the real world. Crucially at this stage we must remain open to change. This is important, because testing often reveals where our ideas just don’t work.
Anyone who has been involved in a "think-aloud" usability test will have watched in horror as their carefully considered designs fail in front of their eyes. The moment where the user is fumbling about a screen and narrating… "OK, so I’m wanting to send this now, but I can’t really see how"… whilst your inner monologue is shouting… "Just click that big red button that says ‘Send’! It’s obviously right there in front of you!". But no, it’s just not obvious enough. This doesn’t mean your design has failed (after all, we’re only testing a prototype), but it is an important opportunity to gather feedback.
Testing is not the end
In traditional delivery models, testing often comes right at the end of the process, after the product is built, the communications are all signed off, and the deployment process is finalised. But, in Design Thinking, testing is part of an iterative cycle that’s continuously repeated. It’s a vital part of how you develop a solution, ensuring it meets users’ real needs and expectations before being shared widely.
The framing here is nuanced, but important: unlike more traditional forms of testing, here you’re not attempting to validate success. The aim isn't to prove that the implementation conforms with a pre-produced design specification. You’re bringing the testing process forward to make it an active part of the design process, gathering insight as you go.
This approach is known as ‘shift-left testing’ in software circles. Rather than validating compliance, you’re essentially asking:
- Does this meet the needs we identified?
- Where are people getting stuck?
- What’s confusing, unnecessary or missing?
How to test in the LGPS context
As a result, testing doesn’t need to mean a lengthy evaluation based on a heavyweight engineering process. Small-scale, informal testing can often give early signals as to whether your design assumptions are valid.
For example, you could:
- Observe staff using a new administration tool to spot usability issues.
- Ask scheme members to complete a mock retirement journey using your prototype and talk through what they’re thinking at each step.
- Trial a small documentation change (eg to onboarding materials or FAQs) with frontline staff and review its impact with a small group before rolling it out further.
- If you’re working with a large enough audience (I’d suggest > 1,000 members), you could even try simple 'A/B split testing' where you send two different email formats to two halves of the test group to see which format members engage with more.
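For the A/B split-testing idea above, a quick way to judge whether a difference in engagement is real rather than noise is a two-proportion z-test. The sketch below is illustrative only (the function name and the figures are hypothetical), and assumes you can count how many members in each half of the test group engaged with, say, opened, the email:

```python
import math

def ab_engagement(opens_a: int, sent_a: int, opens_b: int, sent_b: int):
    """Compare engagement (e.g. open rates) between two email formats.

    Returns the two rates and a two-proportion z-score. A |z| above
    roughly 1.96 suggests the difference is unlikely to be chance
    (at the conventional 5% level).
    """
    rate_a = opens_a / sent_a
    rate_b = opens_b / sent_b
    # Pool the two groups to estimate the standard error under the
    # assumption that both formats perform the same.
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    return rate_a, rate_b, z

# Hypothetical example: 1,000 members split into two halves of 500.
rate_a, rate_b, z = ab_engagement(opens_a=180, sent_a=500,
                                  opens_b=140, sent_b=500)
print(f"Format A: {rate_a:.0%}, Format B: {rate_b:.0%}, z = {z:.2f}")
```

In this made-up example the z-score exceeds 1.96, so the 36% vs 28% gap would be worth acting on; with smaller groups the same gap could easily be noise, which is why a reasonably large audience helps.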
The key is to design your test to learn something specific… not just to confirm what you hope is true. Think back to the point at the start of this blog (the potential for the crushing disappointment of a design that doesn’t work as you wanted): mindset is important here. As one of my early-career mentors repeatedly emphasised:
Feedback is a gift
When someone points out something that’s confusing or frustrating, that’s not a setback - it’s a shortcut to improvement. The more honest the feedback, the stronger your ultimate solution will be.
It’s best to invite feedback early and show people that their views are useful and can make a positive difference. This is especially true in public sector environments, where change can feel imposed or the field can seem opaque due to its complexity. Co-creation builds shared understanding and trust.
What happens next?
Of course, not every design ultimately gets the green light. Whilst some ideas need only a tweak, supported by this iterative test-and-learn cycle, others might need a total “back to the drawing board” rethink.
That’s OK. That’s part of the process.
Design Thinking isn’t about getting it right the first time. It’s about getting it right over time… with less waste, more insight and greater confidence.
Testing is the opportunity to gain much of that insight.
The end?
Thank you very much for taking the time to read this. I hope you enjoyed this series and found it useful and - of course - I’d love to hear your feedback too.
If you’ve followed the series from the start, we’ve now covered each of the five stages of the Design Thinking process:
- Start with empathy – Listen, observe and understand the real needs of users.
- Define the problem – Clarify the problem you’re attempting to solve.
- Ideation – Generate many creative ideas.
- Prototype – Bring ideas to life simply and quickly.
- Testing – covered in this blog – learn what works and what to improve.
Once again, at the risk of labouring the point, Design Thinking is an iterative cycle - not a “once and done” project or initiative. You can use this cycle to continually improve your processes, systems and workflows. It promotes a mindset, rather than being simply a method.
It brings a level of structure to creativity, which is often viewed as almost magical, outside the realms of data and science. But it’s this structure that helps to generate and evaluate ideas quickly. It provides a framework and tools which bring a degree of intellectual rigour to empathy - again, something often seen as a “soft” skill, beyond analysis. As a result, it supports learning what works at lower risk, and more quickly, than more traditional approaches. Perhaps most importantly, it helps you make decisions that (rather than being the result of arbitrary opinion) are rooted in the views of the people they directly affect.
The approach doesn’t require a huge budget or a team of specialists. It requires curiosity, commitment, and the open-mindedness and bravery to try, test, listen to feedback and learn from it. In the end, if you’re looking for better outcomes, whether for members, employers, funds or councils, these don’t just happen by chance.
Whether by accident or conscious thought, they’re the result of design.
If you have any questions on anything covered in this blog or on the Design Thinking process, please get in touch.
Important information
This blog is based upon our understanding of events as at the date of publication. It is a general summary of topical matters and should not be regarded as financial advice. It should not be considered a substitute for professional advice on specific circumstances and objectives. Where this blog refers to legal matters please note that Hymans Robertson LLP is not qualified to provide legal opinion and therefore you may wish to obtain independent legal advice to consider any relevant law and/or regulation. Please read our Terms of Use.