Debugging using Automated Software Tests

Many developers think of debugging as stepping through code line by line and inspecting variables along the way. However, there's another way to get the same results with some additional benefits: writing automated software tests, which can either verify the assumptions the developer has about the code, or debunk them.

Whatever the reason for the debugging might be, there's always some entrypoint that the developer chooses to start with. Instead of setting a breakpoint there and starting to step through, they can write a test that calls that entrypoint. This could be a function, an API endpoint, or anything similar. When debugging, we usually want to verify a few things. Examples might be:

  1. When I pass in 5, is the parameter to another function also 5, since it’s passed through?

    For this we could write a white-box test that compares the passed-in parameter with the one received by the dependency (see the first sketch after this list).

  2. When I pass in this specific set of values, what is the result I get?

    You write the first part of a test that passes in the values, execute it, and make sense of the result. Afterwards it's very easy to finish the test by adding the assertions (the third part of the test) so that it actually tests something (this step is very important! See the second sketch below).
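
For the first question, here's a minimal sketch in Python using pytest-style tests and `unittest.mock`; `process` and `save` are hypothetical stand-ins for your entrypoint and its dependency:

```python
from unittest.mock import patch

# Hypothetical code under test: process() forwards its argument to save().
def save(value):
    pass  # imagine a real side effect here

def process(value):
    save(value)

# White-box test: replace the dependency with a mock and check that it
# received the same value that was passed in at the entrypoint.
def test_process_forwards_value_unchanged():
    with patch(f"{__name__}.save") as save_mock:
        process(5)
        save_mock.assert_called_once_with(5)
```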
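
For the second question, a sketch of the same exploratory flow, again with a hypothetical function under investigation: the `print` lets you inspect the result first (e.g. by running `pytest -s`), and the assertion is what you add once you've made sense of it:

```python
# Hypothetical function being debugged.
def price_after_discount(price, percent):
    return price - price * percent / 100

def test_price_after_discount_with_specific_values():
    # Parts one and two of the test: arrange the inputs and act.
    result = price_after_discount(200, 15)
    print(result)  # exploratory step: inspect the actual value first

    # Part three, added afterwards, so the test actually tests something:
    assert result == 170
```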

Debugging like this increases test coverage, deepens your understanding of the code, and documents the code (which is true of all automated software tests, but in this case even more so, since you're actively trying to make sense of the code).

Writing multiple tests at various levels of granularity makes the debugging session very insightful: by targeting either bigger systems (using E2E or integration tests) or smaller units, it's very easy to pinpoint the exact part of the code that is causing problems, or that is responsible for the behaviour you're looking for.
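To illustrate the granularity idea, here's a small hypothetical sketch: the coarse-grained test exercises a whole (toy) pipeline at once, while the fine-grained test narrows a failure down to a single unit:

```python
# Hypothetical two-layer example: a "bigger system" built from a smaller unit.
def normalize(text):
    return text.strip().lower()

def tokenize(text):
    return normalize(text).split()

# Coarse-grained test: covers the whole pipeline.
def test_tokenize_end_to_end():
    assert tokenize("  Hello World  ") == ["hello", "world"]

# Fine-grained test: if the test above fails, this one tells you whether
# the problem lives in the normalization step or elsewhere.
def test_normalize_unit():
    assert normalize("  Hello World  ") == "hello world"
```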
