10 Tips for Designing Better Test Cases

For most testers, designing, executing, and maintaining test cases are regular activities; this is especially true for new testers and those in the early stages of their careers. This post is inspired by some documentation I worked on while standardizing the testing process at one of my previous jobs. Bear in mind that the approach to handling test cases can differ significantly between companies: old-school (Waterfall-like) enterprises might favor very long and detailed test cases, where every step required to execute the test is described in great detail, while more Agile environments, where testing moves at a faster pace, favor shorter and more concise test cases. There is a great course at the MoT on this topic called Optimising Manual Test Scripts For An Agile Environment, by Matt Archer; the course is not too long and it's full of useful info, so I'd highly recommend it.
There is also a third alternative: no test cases at all! This has been a trend in the past few years in some places (usually product companies) where, instead of writing test cases, teams do exploratory testing and often use automation as a way of documenting tests. Sometimes this is done with tools like Robot Framework (for more info on Robot Framework, check out this course on TAU), which has its own domain-specific language that is more readable than the usual automation code; sometimes Gherkin is implemented with automation, which has the same result: automated tests easily readable by humans. The Serenity framework is one such example. Companies using this approach aren't that common, though, so it's highly recommended for almost every tester to have a good understanding of everything involved in dealing with test cases. Below are a few tips on how to improve your test cases and hopefully make your life a bit easier.


Know your Domain - Understand the System Under Test

We need to deeply understand the system under test so our tests can bring concrete value. Learn the business logic and gain a clear understanding of the requirements: analyze the user stories (or whatever specification documents are available), hold knowledge transfer sessions with the domain experts, practice pair testing with more experienced testers on the project, etc. Personally, I don't advocate automating the manual test cases one-to-one, as this leads to duplicate work; instead, automate where it brings the most value, as test automation is demanding and expensive.

Have a Single Point of Failure

A test should ideally check one thing only, if at all possible (for UI tests this can't always be avoided in more complex scenarios based on real-world customer behavior). This applies to both manual tests and automated checks. Tests that only check one thing are easier to understand, easier to diagnose and debug, and more useful for reporting understandable defects, since the test steps double as the steps to reproduce in the bug report ticket.
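To illustrate, here's a minimal JUnit 5 sketch; the LoginService class is a made-up stand-in for a real system under test. Each test makes exactly one check, so a failure can only mean one thing.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class LoginTests {

    // Tiny stand-in for the real system under test, just so the
    // sketch compiles; in a real project this is your application code.
    static class LoginService {
        boolean login(String user, String password) {
            return "alice".equals(user) && "s3cret".equals(password);
        }
    }

    // One check per test: when a test fails, the failure can only
    // mean one thing, and the test name doubles as the diagnosis.
    @Test
    void validCredentialsLogTheUserIn() {
        assertTrue(new LoginService().login("alice", "s3cret"));
    }

    @Test
    void wrongPasswordIsRejected() {
        assertFalse(new LoginService().login("alice", "wrong-password"));
    }
}
```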

Favor Shorter Tests Whenever Possible

It might initially require a bit more effort, but breaking long, complex test cases into many smaller ones will make our tests easier to understand and both faster and easier to execute. Non-testers will also have an easier time understanding why a particular test is failing when reproducing a related defect. If you're familiar with coding, you can draw a parallel with the Unix philosophy: just as a single function should do one thing, a single test should validate a single part of the functionality, making it easier to maintain and execute.

Leverage Test Preconditions

With smart use of test preconditions, we can avoid adding redundant steps to our tests. Some typical preconditions would be: the user is logged in, the user is registered, the user has a certain level of privilege (is an admin, for example), access to a certain environment, etc. Basically, preconditions tell us what we need in order to be able to execute a certain test.
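In automated checks, the same idea maps naturally onto setup fixtures. Here's a minimal JUnit 5 sketch, with a hypothetical Session class standing in for the "logged in as admin" precondition:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class AdminDashboardTests {

    // Made-up Session class representing the state the tests need.
    static class Session {
        final String role;
        Session(String role) { this.role = role; }
        boolean canAccessAdminPanel() { return "admin".equals(role); }
    }

    private Session session;

    // The precondition ("logged in as admin") lives in one place
    // instead of being repeated as explicit steps in every test.
    @BeforeEach
    void logInAsAdmin() {
        session = new Session("admin");
    }

    @Test
    void adminCanOpenTheDashboard() {
        assertTrue(session.canAccessAdminPanel());
    }
}
```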

Reduce Repetition with Test Parameters

Test data parametrization can help when we are dealing with a lot of input combinations in a test. For example, instead of writing a separate test case for each drop-down option, we can parametrize all the available options and have just one (parametrized) test case, which saves us time and improves test readability. This makes our tests data-driven, less repetitive, and more relevant. Some test management tools, like Xray, have built-in support for test parametrization, while others can be adapted to use test parameters via custom fields (I've done this with TestRail). Test parameters are extremely useful in automation as well, and a lot of tools support them: TestNG, NUnit, JUnit, etc.
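As a sketch of how this looks in automation, here's a JUnit 5 parameterized test (it requires the junit-jupiter-params artifact; the drop-down values and validation logic are made up for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.util.Set;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class CountryDropdownTests {

    // Stand-in for the real validation logic; illustrative only.
    static boolean isSupportedCountry(String country) {
        return Set.of("Germany", "France", "Spain").contains(country);
    }

    // One parametrized test replaces three near-identical test cases;
    // each value runs, and is reported, as its own check.
    @ParameterizedTest
    @ValueSource(strings = { "Germany", "France", "Spain" })
    void everyDropdownOptionIsAccepted(String country) {
        assertTrue(isSupportedCountry(country));
    }
}
```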

Use Shared Steps to Make Future Maintenance Easier

If you notice that two or more tests use the same steps, turn those steps into shared steps that can be reused by multiple tests. This improves the readability and maintainability of tests and speeds up test creation in the long run: if a shared step changes, you only need to update it in one place instead of updating multiple test cases.
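The same principle applies in test code. A minimal sketch, with a hypothetical shared helper reused by two JUnit 5 tests:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    // Shared step: both tests reuse this helper instead of duplicating
    // the setup. If the flow changes, only this method needs updating.
    private List<String> createCartWithItem(String item) {
        List<String> cart = new ArrayList<>();
        cart.add(item);
        return cart;
    }

    @Test
    void cartShowsTheAddedItem() {
        assertEquals("book", createCartWithItem("book").get(0));
    }

    @Test
    void cartCountsASingleItem() {
        assertEquals(1, createCartWithItem("book").size());
    }
}
```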

Use Consistent Naming Conventions

This can depend on the project; however, the common-sense advice is to make your test names succinct and as descriptive as possible. Just by reading the title of a test case, its purpose should be evident.
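In automation, the test method name (or JUnit 5's @DisplayName annotation) can carry the same descriptive title. A small sketch with made-up names and an illustrative expiry rule:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class PasswordResetTests {

    // Illustrative rule: reset links older than 24 hours are expired.
    static boolean isExpired(long ageInHours) {
        return ageInHours > 24;
    }

    // Both the method name and the display name state the behavior
    // and the expected outcome, so reports read like a specification.
    @Test
    @DisplayName("Password reset link expires after 24 hours")
    void passwordResetLinkExpiresAfter24Hours() {
        assertTrue(isExpired(25));
    }
}
```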

Group your Tests Logically

Test suites (or test sets) can help us here. If you are validating a user story, the test case(s) you create for it should go into one test suite related to that user story, and we can link them to each other in Jira, for example. Tests can also be grouped by test run or by functionality, as is usually done for regression testing, smoke and sanity tests, etc.
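In automated suites, JUnit 5's @Tag annotation offers one way to group tests logically. A sketch (the checks are placeholders, and the Maven command in the comment assumes the Surefire plugin):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class SearchTests {

    // Tags group tests logically; a CI job can then run just one
    // group, e.g. with Maven Surefire: mvn test -Dgroups=smoke
    @Test
    @Tag("smoke")
    void searchPageLoads() {
        assertTrue(true); // placeholder check for the sketch
    }

    @Test
    @Tag("regression")
    void searchHandlesEmptyQuery() {
        assertTrue("".isEmpty()); // placeholder check for the sketch
    }
}
```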

Actual and Expected Results

When we design our test cases, we usually add expected result(s) per test case, or for each test step; if these assumptions prove true, the test passes. Once we execute the test case(s), we add the actual results as well; if these differ from the expected results, we will generally report a defect for the failing test, after investigating why it failed. Expected results should be clear and unambiguous, basically booleans: true or false.
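Automated checks encode this directly: the assertion compares an expected value against the actual one and fails unambiguously. A tiny JUnit 5 sketch with an illustrative tax calculation:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class TotalPriceTests {

    // Illustrative calculation: net price plus a flat 20% tax.
    static int totalWithTax(int net) {
        return net + net / 5;
    }

    // assertEquals takes (expected, actual); on failure JUnit reports
    // both values, which is exactly the expected-vs-actual comparison.
    @Test
    void totalIncludesTwentyPercentTax() {
        assertEquals(120, totalWithTax(100), "net 100 should total 120 with 20% tax");
    }
}
```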

Include Execution Evidence

Not always mandatory, but very nice to have: adding screenshots or video recordings of test execution can provide proof of successful test execution to the stakeholders. Even more importantly, evidence is crucial for failing tests, as it makes reproducing the issue much easier. Many test management solutions allow us to report bugs directly from the test case execution, saving us some time: this way you instantly have steps to reproduce the bug and evidence of it (screenshots, recordings, logs, etc.).
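If your automated UI tests use Selenium WebDriver, capturing a screenshot as evidence takes only a couple of lines via the TakesScreenshot interface. A minimal sketch (the URL and file name are illustrative, and it assumes a ChromeDriver binary is available):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ScreenshotEvidence {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // page under test; illustrative URL
            // Capture a screenshot as execution evidence and copy it
            // somewhere the test report (or bug ticket) can pick it up.
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Files.copy(shot.toPath(), new File("evidence.png").toPath(),
                    StandardCopyOption.REPLACE_EXISTING);
        } finally {
            driver.quit();
        }
    }
}
```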

Hopefully, these tips will help you write better test cases and make your life a bit easier. Keep in mind that a lot of best practices are context dependent and you should not blindly implement something just because it works for someone else; analyze your needs and base your decisions on that. Practice trial and error when trying out new things, and remember that process improvement is (or should be) a never-ending task. Just don't let it become a burden.

Thanks for reading!

