Usability testing
Usability testing provides direct feedback from real people (the general public or specialized users) on their experience with your product or service. There are different types of usability testing, but the most common asks participants to complete specific tasks on existing or proposed web pages. This measures task success and gathers insights that identify pain points and inform design decisions. At the end of this activity you should have a report on the quantitative and qualitative results for each task, as well as a list of issues that need to be addressed.
On this page
- When to use
- Who is involved
- How to do it
- Deliverables and artifacts
- Tools and templates
- Reference material
When to use
Usability testing is used to:
- measure task success (quantitative data)
- gather feedback from participants on anything that may be confusing or preventing them from completing the task (qualitative data)
- collect other related data such as user confidence, time on task and overall user impressions of the experience
In the context of a content optimization project, test with users at least twice:
- Baseline testing
  - When you're improving existing content, baseline testing helps you understand what works and what doesn't before starting the design process
  - Establishes a usability benchmark so that you can measure how much task success increases with an improved design
- Validation testing
  - Tests whether the new design is an improvement over the old design
  - Uses the same test plan as baseline testing (same composition of recruits and same questions)
  - Identifies areas where the design still requires improvement
  - In some cases, spot checks (a subset of tasks with a small number of users) may be conducted ahead of full validation testing to ensure that the design is on the right track and to identify additional improvements that could be made
Outside of a content optimization project, usability testing is also used:
- to regularly test top tasks to determine whether they need to be optimized
- when developing a new product
If you are considering usability testing, reach out to the Digital Design and Production Directorate to request a consultation with a UX researcher as early as possible.
Who is involved
- Lead
  - Usually led by UX researchers to ensure accurate analysis of the results and to guide the product team in developing solutions
  - For some types of testing, such as tree testing, it may be led by an information architect or content designer
- Others who may help
  - External contractors may provide assistance with recruiting and/or testing
  - Other project team members are encouraged to observe testing sessions to understand where and why users may be struggling
How to do it
There are 5 steps in this activity.
Before you start, make sure that you have identified the user tasks for improvement.
1. Determine the best approach for testing
Start by determining the best approach for testing based on the needs of your project and available resources.
This includes:
- What you are testing
  Describe the product that you are focusing on for evaluation. This may include:
  - The information architecture
  - Mockups
  - Existing web pages
  - Prototype web pages
- Methodology
  Describe the method of evaluation for the test, including:
  - The type of usability test to be run (website testing, prototype testing, tree testing, etc.)
  - Whether the testing will be moderated or unmoderated
  - The expected duration of a test session
- Language
  - Whether the test will be conducted in English, French, or both
- Participants
  Describe the individuals who will participate in the test, considering:
  - The number of participants
  - Users that reflect the target user profile and the diversity of users of the content being tested
  - Any demographic considerations, such as:
    - Specific professions
    - Age ranges
    - Geographic location
    - Income ranges
    - Proportion of English vs. French participants (if applicable)
    - Users with accessibility challenges and different devices
  Note: The more specific you are, the longer and more difficult it may be to recruit participants.
- Platform
  - The service that will be used to conduct the test
- Task scenarios / questions
  List the task scenarios and/or questions that participants will be asked to complete. If applicable, for each task scenario or question, include:
  - The starting page and the target page with the correct answer
  - Answer options (note the correct answer)
  - Criteria for success
- Results
  Describe what to expect after testing is completed, considering:
  - How the test results will be evaluated
  - What key performance indicators (KPIs) will be measured as part of the evaluation
  - What form or artifact the scoring will take
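A task scenario and its success criteria can be captured as a simple structured record that doubles as an answer key when scoring results. This is a minimal sketch; the scenario text, pages and field names are hypothetical illustrations, not from any official template:

```python
# Hypothetical structure for one task scenario in a test plan.
# All field names and values are illustrative only.
task_scenario = {
    "scenario": "You want to renew your passport. Find out what "
                "documents you need to bring.",
    "starting_page": "/passports",
    "target_page": "/passports/renew/documents",
    "answer_options": ["A", "B", "C", "D"],
    "correct_answer": "B",
    "success_criteria": "Participant reaches the target page and "
                        "selects the correct answer.",
}

def is_success(result, scenario):
    """Score one participant's result against the scenario's answer key."""
    return result.get("answer") == scenario["correct_answer"]

print(is_success({"answer": "B"}, task_scenario))  # -> True
```

Keeping the correct answer and success criteria alongside each scenario makes scoring consistent across baseline and validation rounds, since both use the same test plan.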
2. Create testing materials
Prepare the materials that will be used in testing. If you require a very specific audience group for testing, you may want to start the next step (Recruit users) while doing this step.
This includes:
- Participant screening questions
- Pages/documents to be tested
- Instructions and scripts
- Note-taking document
Compile these into a testing plan (use test plan template).
Translate the materials when you're ready to test in the second language.
3. Recruit users
Recruit users according to the participant profile outlined in the test plan.
If you are conducting unmoderated testing, you may be able to recruit users through the testing platform. For moderated testing, or when unmoderated testing requires a specific target audience, work through a contracted agency to recruit users (this must go through a UX researcher in DDPD) or ask program areas whether they can suggest contacts to help with recruitment.
The more specific or specialized the audience you require, the more time it may take to recruit those users and get the desired number of completed and valid tests. Take this into account when planning timelines.
4. Conduct testing
The next step is to conduct the testing with users. Review the guidance specific to the type of testing that you have selected.
Before launching the testing, conduct a pilot test with 3 to 5 users (recommended). These sessions will help you to:
- Ensure all of your materials work as expected (easy for users to understand instructions, no technical problems)
- Ensure that the tasks address the challenges uncovered in discovery (if participants are able to easily and successfully complete the tasks, then you will need to consider if you're testing the right thing and make adjustments)
Based on the results of the pilot test sessions, refine the test plan and materials as needed.
Testing with users will usually take place over a period of a couple of weeks for unmoderated testing or longer for moderated testing. The more specific the target user profile is, the longer it could take to get the desired number of tests completed.
5. Prepare a report
Once the testing is complete, analyze the results and prepare a report with the testing results and key issues to be addressed in the design phase. Use the UX scorecard report template. The report should include:
- Summary of the goals, methodology and results of user testing
- System Usability Scale (SUS) score and sub-scores
- Task-by-task results, including:
  - Success rate
  - Time on task
  - Perceived ease of use
- Observed issues (identified and illustrated with video clips, screen captures and data) and recommendations
- For validation testing: comparative results (baseline vs. validation)
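The System Usability Scale score listed above comes from the standard 10-item questionnaire, each item answered on a 1 to 5 scale. A minimal sketch of the standard scoring formula:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from the
    standard 10-item questionnaire, each item answered 1-5."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses between 1 and 5")
    # Odd-numbered items (positively worded): score = response - 1
    # Even-numbered items (negatively worded): score = 5 - response
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 sum to 0-100

# One participant's responses to items 1 through 10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```

Note that a SUS score is not a percentage; it is typically averaged across participants and compared against the benchmark established in baseline testing.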
Reporting on tree testing can be completed fairly quickly and easily, while reporting on unmoderated or moderated usability testing will take longer, as you need to drill down into the issues that are observed.
Next steps
Once the report is complete, provide the results to the Data and Usability Analysis (DUA) team so that they can be added to the Usability Performance Dashboard.
Use the results of baseline testing to prioritize areas for improvement and to inform design decisions. For validation testing, if the results do not meet the standard (an 80% task success rate or a 20% improvement in the task success rate), further analysis of the initial issue may be required, and the design solution may need to be adjusted based on user feedback gathered during testing.
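The validation standard above reduces to simple arithmetic. A minimal sketch, with rates expressed as fractions and the 20% improvement interpreted as percentage points (adjust if your standard means relative improvement):

```python
def meets_standard(baseline_rate, validation_rate):
    """Check the validation standard: at least an 80% task success
    rate, or at least a 20-point improvement over the baseline.
    Rates are fractions between 0 and 1."""
    improved_enough = (validation_rate - baseline_rate) >= 0.20
    return validation_rate >= 0.80 or improved_enough

print(meets_standard(0.45, 0.70))  # +25 points over baseline -> True
print(meets_standard(0.60, 0.72))  # below 80% and only +12 points -> False
```

A task that fails both conditions is a candidate for the further analysis and design adjustment described above.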
Deliverables and artifacts
When you're done you should have:
- Test plan
- UX scorecard report
Tools and templates
Reference material
This activity is part of the: