For many years, software quality assurance (QA) teams tested software as a non-repeatable manual process. What they needed was a way to reliably replay an entire test suite, without human intervention, that quickly reports on the health of an application. With the introduction of automated software testing, organizations quickly adopted this revolutionary idea.

What is Automated Testing?
Automated testing is a technique where developers write a program to test the components of the system being built. These programs set up the data, invoke the features of the system and verify the results. Automated testing helps detect potential bugs early and also provides a regression suite as the software evolves.

Goals of Automated Testing
When writing tests, you attempt to achieve these goals:

  • Reduced project risks
  • Improved software design and quality
  • A way to document the “System under Test”
  • A way to easily run the tests from the command line
  • Tests that are easy to write and maintain
  • Tests that require minimal maintenance as the software evolves around them

Software quality improves because we are writing software that compares the actual outcome of a given feature to the expected outcome. Automated tests reduce the time QA needs to verify each feature, so it reaches production sooner with less ceremony.

Automated tests also mitigate the risks of complicated algorithms and logic: by writing tests directly against them, you can prove they work and continue to work as the software evolves.

Being able to run your tests from the command line before each check-in, and as part of your CI/CD (continuous integration/continuous delivery) pipeline, dramatically reduces integration issues on a multi-developer project.

The key when introducing automated testing is to do so with as little impact on the project budget and timelines as possible. Writing automated tests adds yet another codebase to maintain, so it must be done right to fully realize the expected ROI. Many organizations have tried and failed because they could not keep the cost of maintaining these tests low enough to make it viable.

Economics of Automated Testing
Automated tests add time and cost to your software project. The cost of building and maintaining a suite of tests is offset by savings through reduced QA effort, debugging and troubleshooting. With automated testing, you reduce the need for manual debugging, since developers can simply run the tests. The graph below shows the initial investment and the point at which the reduced ongoing effort recovers those upfront costs.

[Graph: upfront investment in test automation and the later savings that recover it — reference: xunitpatterns.com]

Upfront costs

It takes time to build up a suite of tests and the foundational code needed to support them. Building this foundation initially slows the development team down, but it pays off over time as the team gains knowledge and its velocity increases.

When done right, developing a suite of automated tests will reduce your overall costs and help get your product to market sooner with higher quality. With customer-facing sites and applications, the quality of your software has a direct impact on your corporate image.

Cost of maintaining tests

Now that you have invested in automated tests, you must keep them up to date, refactor them as the system evolves and add more tests when introducing new features. It is essential to have a good strategy in place to minimize your test code footprint so that the ongoing effort of maintaining these tests stays in line with your project costs.

Strategies to reduce the cost of ownership

Many strategies help keep the cost of maintaining tests low. Here we will discuss a few that I have found to be most effective.

1. Testing Tools and Open Source Libraries

Use popular open-source tools to write your unit tests (JUnit, NUnit, xUnit, PHPUnit). These tools are available for almost every language, and there is a large open-source community backing them.
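As a minimal sketch of what such a test looks like with JUnit, the test below sets up the data, invokes the feature and verifies the result. The Calculator class and its add method are hypothetical stand-ins for your own component.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

  @Test
  public void addReturnsTheSumOfItsArguments() {
    // Set up the data
    Calculator calculator = new Calculator();

    // Invoke the feature of the system under test
    int result = calculator.add(2, 3);

    // Verify the actual outcome against the expected outcome
    assertEquals(5, result);
  }
}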

2. Test Object Collaboration using Mocks or Stubs

Isolate the component under test by using mocking libraries to gain control over its dependencies. Not only does this allow you to test the component, but you can also verify the calls it makes to these dependencies and the values it uses for given scenarios.
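As an illustration, here is a sketch using the Mockito library; the OrderService and PaymentGateway types are hypothetical. The mock stands in for the real gateway, and the test verifies both the call the component made and the value it used.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

  @Test
  public void placeOrderChargesTheGateway() {
    // Replace the real dependency with a mock we fully control
    PaymentGateway gateway = mock(PaymentGateway.class);
    when(gateway.charge(100)).thenReturn(true);

    // The component under test is now isolated from the real gateway
    OrderService service = new OrderService(gateway);
    service.placeOrder(100);

    // Verify the call the component made and the value it used
    verify(gateway).charge(100);
  }
}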

3. Test Code Duplication

Create reusable Test Fixtures to reduce the amount of test setup across a series of unit tests, and build custom assertion methods to help verify the outcome of each test. Use Test Object Factories to centralize object creation and enable reuse.
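As a sketch, a Test Object Factory like the ObjectMart used in the example under strategy 4 might look like this; the JwtRuleContext constructor shown here is an assumption for illustration.

import static org.assertj.core.api.Assertions.assertThat;

// Centralizes test object creation so individual tests stay short
public final class ObjectMart {

  private ObjectMart() {}

  public static JwtRuleContext getValidJwtRuleContext() {
    return new JwtRuleContext("a.valid.jwt-token");
  }

  public static JwtRuleContext getInvalidJwtRuleContext() {
    return new JwtRuleContext("not-a-jwt");
  }

  // A custom assertion method makes the verification step reusable
  public static void assertValid(JwtRuleContext context) {
    assertThat(context.isValid()).isTrue();
  }
}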

4. Self-documenting Code

Commenting your code is a topic that usually generates a lot of discussion. In my experience, comments are not as well maintained as the software itself. It takes great discipline to keep the comments up to date as the code changes; when under pressure to resolve production issues or get new features out quickly, updating the comments is often missed.

Use techniques like the fluent interface, and there is no need to document your code further: the code itself acts as documentation.

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

@Test
public void acceptWithInvalidJwtToken() {
  // The test object factory supplies a context holding an invalid token
  JwtRuleContext invalidContext = ObjectMart.getInvalidJwtRuleContext();
  rule.accept(invalidContext);
  assertThat(invalidContext.isValid()).isFalse();
}

@Test
public void acceptWithValidJwtToken() {
  // The test object factory supplies a context holding a valid token
  JwtRuleContext validContext = ObjectMart.getValidJwtRuleContext();
  rule.accept(validContext);
  assertThat(validContext.isValid()).isTrue();
}

The example above shows two tests for a rule within the Chain of Responsibility design pattern, commonly used for complex logic that would otherwise result in massively nested if…then…else structures.
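A minimal sketch of the pattern, assuming rules share an interface like the one exercised above; the hasValidSignature and markInvalid methods on JwtRuleContext are illustrative assumptions.

// Each rule in the chain handles one concern, then hands the context on
public interface Rule {
  void accept(JwtRuleContext context);
}

public class JwtSignatureRule implements Rule {

  private final Rule next;

  public JwtSignatureRule(Rule next) {
    this.next = next;
  }

  @Override
  public void accept(JwtRuleContext context) {
    if (!context.hasValidSignature()) {
      // This rule rejects the token; the rest of the chain is skipped
      context.markInvalid();
      return;
    }
    if (next != null) {
      next.accept(context); // hand off to the next rule in the chain
    }
  }
}

Each rule stays small and independently testable, which is exactly what the two tests above exercise.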

The wording used for variable and test method names requires no further documentation; the intent of these tests is obvious even to a non-programmer. Your production code also needs to follow this basic principle of conveying intent in simple language constructs whenever possible.
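In the same spirit, the fluent interface mentioned earlier can make test data setup read like a sentence. This builder is a hypothetical sketch; the JwtRuleContext constructor is again assumed.

public class JwtRuleContextBuilder {

  private String token = "";

  public static JwtRuleContextBuilder aJwtRuleContext() {
    return new JwtRuleContextBuilder();
  }

  public JwtRuleContextBuilder withToken(String token) {
    this.token = token;
    return this; // returning 'this' is what makes the interface fluent
  }

  public JwtRuleContext build() {
    return new JwtRuleContext(token);
  }
}

// Usage reads like plain language:
// JwtRuleContext context = aJwtRuleContext().withToken("a.valid.jwt-token").build();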

5. Design For Testability

Designing software is hard; designing for testability makes it harder. There are times when compromising the basic principles of good software design enables automated testing. A typical example would be breaking encapsulation to test an otherwise private method.
The benefits and quality that result from having these tests offset the design compromise. With commercial software, where quality, reliability and time to market are key performance indicators of success, these design principles are less important, and the ability to write these tests already forces a better design.

With design practices such as Domain-Driven Design and the Single Responsibility Principle, your components should be small enough that the need for such compromises is the exception, not the rule.
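One common way to design for testability without breaking encapsulation is constructor injection. The sketch below, with all type names hypothetical, injects a Clock so a test can control time deterministically instead of relying on the system clock.

import static org.assertj.core.api.Assertions.assertThat;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

import org.junit.Test;

// The Clock is injected rather than created inside the class, giving
// tests a seam to control time without exposing any private members
class TokenExpiryChecker {

  private final Clock clock;

  TokenExpiryChecker(Clock clock) {
    this.clock = clock;
  }

  boolean isExpired(Instant expiresAt) {
    return clock.instant().isAfter(expiresAt);
  }
}

public class TokenExpiryCheckerTest {

  @Test
  public void tokenPastItsExpiryDateIsExpired() {
    // A fixed clock makes the outcome deterministic and repeatable
    Clock fixedClock = Clock.fixed(Instant.parse("2020-01-02T00:00:00Z"), ZoneOffset.UTC);
    TokenExpiryChecker checker = new TokenExpiryChecker(fixedClock);

    assertThat(checker.isExpired(Instant.parse("2020-01-01T00:00:00Z"))).isTrue();
  }
}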

I hope you’ve enjoyed these tips and can put them into practice on the projects or products you are building. If you’d like to talk more, or have ideas about other strategies I didn’t cover here, ping me at greg@xerris.com.
