Usability Testing
What is usability testing?
Usability testing is a user experience evaluation method that involves observing representative users performing specific tasks on a digital product, prototype or interactive mockup. The goal is to identify ergonomic issues, friction points and misunderstandings before the product goes into production or, in the case of an existing product, to guide its improvement.
Unlike an expert review or heuristic analysis, usability testing puts real users in front of the product under controlled conditions. It reveals problems that designers and developers, too familiar with the product, can no longer perceive. This is the "curse of knowledge" principle: what seems obvious to the project team is not necessarily obvious to the end user.
At Kern-IT, within the KERNWEB division, usability tests are integrated into every significant project. They are conducted on Figma prototypes before development and sometimes on the live site to validate continuous improvements.
Why usability testing matters
Usability testing is the only method that directly confronts design hypotheses with user reality. Its value is immense and well documented.
- Real problem detection: tests reveal usability issues that neither internal reviews nor analytics can detect. A button the entire team finds obvious may be invisible to an external user.
- Evidence-based decisions: tests replace opinion debates ("I think that", "I prefer") with factual observations ("three out of five participants did not find the button").
- High return on investment: Jakob Nielsen's research shows that five users are enough to identify roughly 85% of usability problems. The cost-benefit ratio is extremely favourable.
- Continuous improvement: usability tests are not a one-off event. Integrated into a continuous improvement cycle, they measure the impact of changes and verify that they actually solve the identified problems.
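The five-user figure above comes from Nielsen and Landauer's problem-discovery model: the expected share of problems found by n testers is 1 - (1 - p)^n, where p is the average probability that a single user encounters a given problem (their reported average is about 0.31). A minimal sketch of that curve, assuming those published values:

```python
# Nielsen & Landauer problem-discovery model (illustrative sketch).
# p ~ 0.31 is their reported average chance that one tester hits a problem.
def found_fraction(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems found by n_users testers."""
    return 1 - (1 - p) ** n_users

# The curve rises steeply, then flattens: around 85% at five users,
# which is why extra participants yield diminishing returns.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {found_fraction(n):.0%} of problems found")
```

The flattening of this curve is also why the two-cycles-of-five approach described later beats a single cycle of ten: iterating between cycles resets the pool of undiscovered problems.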
How it works
A usability test follows a structured protocol that ensures the reliability and usefulness of the results. At Kern-IT, we distinguish several formats depending on the project context.
Moderated in-person testing is the richest format. A moderator guides the participant through tasks, observes reactions and asks clarifying questions. Sessions last 30 to 60 minutes and are recorded (with participant consent). This format is ideal for complex projects where subtle interactions matter.
Moderated remote testing uses a video-conferencing and screen-sharing tool. The participant interacts with the prototype or site from their own environment. This format has become predominant and offers the advantage of testing geographically dispersed users at a lower logistical cost.
Unmoderated testing uses specialised platforms that record sessions automatically. The participant completes tasks alone, at their own pace. This format is suited to quantitative tests with a large number of participants.
Regardless of format, a usability test always includes: a test protocol (scenarios and tasks), a pre-test questionnaire (participant profile), metrics (success rate, completion time, errors) and an analysis report (findings, severity, recommendations).
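The metrics listed above (success rate, completion time, errors) can be compiled from session records with a few lines of code. A minimal sketch, assuming a hypothetical record layout (the field names are illustrative, not a standard):

```python
from statistics import median

# Hypothetical session records, one row per participant/task pair.
sessions = [
    {"participant": "P1", "task": "checkout", "success": True,  "seconds": 74,  "errors": 0},
    {"participant": "P2", "task": "checkout", "success": False, "seconds": 180, "errors": 3},
    {"participant": "P3", "task": "checkout", "success": True,  "seconds": 95,  "errors": 1},
]

def task_metrics(records: list[dict], task: str) -> dict:
    """Aggregate success rate, median completion time and error count for one task."""
    rows = [r for r in records if r["task"] == task]
    return {
        "success_rate": sum(r["success"] for r in rows) / len(rows),
        "median_seconds": median(r["seconds"] for r in rows),
        "total_errors": sum(r["errors"] for r in rows),
    }

print(task_metrics(sessions, "checkout"))
```

The median is used rather than the mean because one participant who gets stuck can skew an average badly with only five or six data points.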
Concrete example
Kern-IT is designing an e-commerce portal for a Belgian chocolate brand. Before launching full development on Wagtail CMS, a high-fidelity prototype is created in Figma and submitted to a usability test with six participants.
The test comprises three tasks: find a gift box for under 30 euros, add a personalised greeting card and complete the order. Results reveal that four out of six participants cannot find the greeting card option, hidden in a basket sub-menu. Two participants abandon the checkout at the account creation step, frustrated by the requirement to create an account to order.
Recommendations are clear: integrate the greeting card option directly on the product page with a visible button and offer a guest checkout option. The prototype is modified, retested with three participants and both problems are resolved. Development can begin with confidence.
Implementation steps
- Define objectives: specify what the test should validate or invalidate. Which critical journeys need testing?
- Recruit participants: select five to eight people matching the defined personas. Absolutely avoid colleagues or acquaintances.
- Write the protocol: formulate realistic scenarios and precise tasks without steering the participant towards the solution.
- Prepare the environment: set up the Figma prototype, video-conferencing tool or test room. Test the setup before the first session.
- Conduct the sessions: observe without intervening, note behaviours and think-aloud comments. Ask open-ended questions at the end of each task.
- Analyse the results: compile metrics, classify problems by severity (critical, major, minor) and formulate actionable recommendations.
- Present and iterate: share the report with the team and client, modify the prototype or product and retest if needed.
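The analysis step above can be sketched in code: findings are tagged with a severity and sorted so the report leads with critical issues. The severity labels mirror the classification in the list (critical, major, minor); the example findings and field names are invented for illustration:

```python
# Severity ranking for report prioritisation (lower rank = more urgent).
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2}

# Invented findings, loosely based on the chocolate-portal example above.
findings = [
    {"issue": "Greeting-card option hidden in basket sub-menu", "severity": "major", "affected": 4},
    {"issue": "Forced account creation blocks checkout", "severity": "critical", "affected": 2},
    {"issue": "Low-contrast price labels", "severity": "minor", "affected": 1},
]

def prioritise(items: list[dict]) -> list[dict]:
    # Critical first; within a severity, the issue hitting the most participants first.
    return sorted(items, key=lambda f: (SEVERITY_ORDER[f["severity"]], -f["affected"]))

for f in prioritise(findings):
    print(f'[{f["severity"].upper():8}] {f["issue"]} ({f["affected"]}/6 participants)')
```

Sorting by severity first and affected count second keeps the recommendation list actionable: the team always sees the blocking issues before the cosmetic ones.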
Related technologies and tools
- Figma: the prototyping tool used by Kern-IT to create the interactive prototypes submitted to usability tests, with easy sharing via a link.
- Lookback / UserTesting: remote usability testing platforms enabling video recording, screen sharing and feedback collection.
- Wagtail CMS: the Django-based CMS on which improvements validated by tests are implemented in the production site.
Conclusion
Usability testing is the ultimate safeguard against flawed design assumptions. By confronting a digital product with real users, it reveals problems invisible from the inside and provides concrete data to solve them. At Kern-IT, the KERNWEB division views usability testing as essential quality assurance: it guarantees that every Wagtail site delivered truly meets its end users' expectations.
Five participants are generally enough for a qualitative usability test: beyond that, the same problems tend to recur without yielding many new discoveries. At Kern-IT, we prefer two cycles of five tests with an iteration in between to a single cycle of ten tests.