The link between TDD and innovators
How writing tests for your software can get you closer to being an innovator
Created on 19 April 2023.
While reading "Learning to Build" by Bob Moesta, I came across this section, which I'll quote below, with the title "The Myth about Failure and Innovation":
There's a mantra in the world of innovation that says, "if you're not failing, then you're not innovating". But failure by itself is not a rite of passage to becoming an innovator. It's the learning that comes from failing that makes someone an innovator. If I leave a failure on the table without uncovering why, then it's a waste--making failure useful is the hard part of the innovation.
What I did after reading this bit was to create a connection with software design and development. Specifically with Test Driven Development (TDD).
This happened in two different ways. First is the classical approach of doing TDD when creating something new. You start by writing a test. This test will fail. Then you write the minimal amount of code that will make the test pass. Then you refactor. Rinse and repeat.
Sounds easy, but it takes a while to learn and adjust. Now, the first failure you get is something you expect. You are writing a test for something that isn't there yet, so it makes sense that it fails, and you don't necessarily "learn" anything from it.
However, as you continue this process, you will eventually hit a failure you did not expect. This is where TDD is very valuable, and where the quote starts to make sense: you get to quickly understand why something is failing.
The second scenario where this quote applies is with existing software that is already running in production. Naturally, as people use your product, sooner or later they will encounter bugs. And report them.
A reported bug, or one found by other means, can be considered a failure. As such, approached with an innovator's mindset, it is an opportunity to learn something and to make your product more resilient.
How to deal with production bugs?
First, you establish its importance (how often it shows up, how many users are affected, whether it touches components on the critical path, etc.). This helps with prioritizing and makes sure you focus only on what's impactful.
Then, you analyze the system, the behavior, and the bug itself - try to reproduce it. During this activity you might ask yourself:
- How can I fix this?
- How can I prevent this?
- How can I be alerted sooner?
Rather than letting a paying customer tell you there's a bug in your system, what steps can you take to make sure you know before they do?
Finally, you can actually:
- Automate all the steps needed to debug the issue and understand if it's a problem or not.
- Implement an automated test that confirms the bug. Then implement the fix, make sure all the tests pass, and deploy the changes.
- If applicable, implement an automated self-healing mechanism: a bug was automatically identified, and then the steps to fix it were automated as well.
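The second step above - a test that confirms the bug before the fix exists - can be sketched like this. The `order_total` function and the reported bug are hypothetical, invented here to illustrate the pattern:

```python
def order_total(prices, discount):
    # Hypothetical reported bug: a discount larger than the subtotal
    # produced a negative total. The fix clamps the result at zero.
    subtotal = sum(prices)
    return max(subtotal - discount, 0)

def test_discount_never_produces_negative_total():
    # Written first to confirm the bug: it failed against the unclamped
    # version, and now guards against the regression forever.
    assert order_total([10, 5], discount=20) == 0
```

Once this test is in the suite, the bug cannot silently reappear: any change that reintroduces it turns the build red before it reaches production.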
This is how you learn. By doing this, you prevent this particular bug from showing up in production again after it's fixed, which reduces maintenance cost. And because it is implemented at the integration level, it's not only you who will be using this method and its benefits. It's the entire team.
So the safeguard system will help you, your team, and your future team members as well.