What is it?
User acceptance testing (UAT) is exactly what the name suggests: the user of an app or feature verifies, through testing, that the feature has been designed and functions as expected, or that a bug has been fixed correctly. When a client or customer requests an application or feature, this kind of testing gives the intended audience a test plan for confirming the work has been done to their satisfaction.
Who does this testing?
Typically, this is testing we would want an end user to complete. In some cases, it may be a client or stakeholder who requested a new feature, a workflow change, or a bug fix. Ideally, the test plan is intended for someone outside the development team, which both verifies that the work meets the requestor's expectations and brings a fresh outside perspective to the app.
How should I make a test plan?
My key advice for anyone writing a test plan is to make the following as clear and concise as possible:
- Are there any assumptions about the test scenario before the tester starts? A great example for a web app: do you expect the tester to already be logged in, or not? Note which expectation applies, or note that it isn't relevant. Anything that could affect the state in which a test starts should be spelled out so your tester can start on the right foot.
- What are all of the steps to take in the scenario? This complexity may not always be present, but it matters when there are multiple approaches within an app to complete a scenario. Adding a new user, for example, could have multiple routes (self sign-up vs. an administrative page) to the same end, but perhaps only one route was changed during development. Be explicit about which path you want the tester to take, and whether alternative paths should also be tested.
- At each step in the test scenario, what is expected? This is key to confirming that your own pass-through check on a feature aligns with the user acceptance test plan you are about to hand off to someone else. Noting the expectation at each step makes it easier for a tester to recognize when something unexpected has occurred.
- Did the expected behavior match what actually happened? Note what matched and what differed. This is key for reproducing problematic results.
Example Test Plan Template
| Scenario Name | Step # | Instructions per Step | Expected Result | Actual Result | Pass/Fail | Notes |
|---------------|--------|-----------------------|-----------------|---------------|-----------|-------|
|               |        |                       |                 |               |           |       |
|               |        |                       |                 |               |           |       |