As teams and enterprises grow, it becomes important to have scalable testing processes in place. In particular, it is vital to have continuous testing throughout the software development lifecycle (SDLC). Testing should start in parallel with development, right from the requirements phase, and should continue all the way through deployment and production monitoring. This aligns with the “Shift Left” paradigm.
In recent years, as organizations strive to deliver applications at high velocity to stay competitive, teams have also started implementing the “DevOps” approach to the SDLC. This means there is an added need for automation at every step of the software development and testing process.
That being the case, how do we implement continuous testing across the enterprise? This five-step guide offers a strategic approach to implementing a scalable, continuous testing process in such an environment.
- What to Test vs. What Not to Test
This can be a contentious issue. In the unit testing world, the growing wisdom is that you test everything. EVERYTHING. And then, once you’ve tested everything inside the code, you test everything outside the code too. And then you test the tests. Heck, you’re even supposed to test the requirements to ensure they’re complete.
In reality, this is a bit of a Rumsfeld situation. Donald Rumsfeld, former US secretary of defense, had some spectacularly practical quotes about operational military strategy, and one of them applies here: “You go to war with the army you have, not the army you might want.” In testing, this means you have to pick your battles and your tests. Sure, your plan has an end goal of “EVERYTHING” in capital letters with ten underlines, but in reality, you’re going to have to pick your testing targets of opportunity, and the number of those targets you can meet by ship date is “the army you have.”
Therefore, it is critical to prioritize what to test along the way. The top of that list should be (and in this order):
- Your critical paths.
- How your business makes money.
- How your users use the application.
- How your application services are advertised.
- What has been a problem in the past.
With this list in hand, you can begin your prioritization. Your critical path and your business logic should be intimately tied together, and the tests for one should likely bleed into the other. That won’t always be the case, but if it isn’t, you may not be thinking about these paths properly. Your line of business and the flow of money through your system should generally be considered the most important and most sensitive aspects of your system. If that’s not the case, you’re likely in need of a different set of tips and tricks: something more holistic and architecture-related.
For the line of business and the flow of revenue, your tests need to be comprehensive, include all regressions, and possibly even include the experimental and more technically challenging tests. These also need to include end-to-end tests of functionality, ensuring the flow of system data can be fully tracked from beginning to end, from revenue generation to bank deposit.
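An end-to-end revenue test of the kind described above can be sketched in miniature. This is a hedged, illustrative example: the `Order`, `create_invoice`, and `post_to_ledger` names are hypothetical stand-ins for whatever your billing and accounting services actually expose, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount_cents: int

def create_invoice(order: Order) -> dict:
    # In a real system this would call the billing service.
    return {"order_id": order.order_id, "amount_cents": order.amount_cents}

def post_to_ledger(invoice: dict, ledger: list) -> None:
    # In a real system this would hit the accounting backend.
    ledger.append({"ref": invoice["order_id"],
                   "credit_cents": invoice["amount_cents"]})

def test_revenue_flow_end_to_end():
    ledger: list = []
    order = Order(order_id="ord-123", amount_cents=4999)
    invoice = create_invoice(order)
    post_to_ledger(invoice, ledger)
    # The amount must survive every hop, and the ledger entry must be
    # traceable back to the originating order.
    assert ledger[0]["ref"] == order.order_id
    assert ledger[0]["credit_cents"] == order.amount_cents

test_revenue_flow_end_to_end()
```

The point is the shape of the assertion: the same identifier and the same amount checked at the first and last hop, so any stage that drops or mangles the data fails loudly.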
As the core business logic and data flow are vital to your business and cannot go down, this will also be an intense area of focus for your pipelines and continuous integration and deployment infrastructure. You’re going to want to test that pipeline as well: are your sign-offs in place? Do pull requests trigger change alerts? Are there processes ensuring that only fully tested code reaches production?
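One way to sketch that last check, the "only fully tested code reaches production" gate, is as a simple promotion rule. This is an illustrative sketch; the check names are assumptions to be mapped onto whatever your CI system actually reports.

```python
# Hypothetical names for the checks a build must pass before promotion.
REQUIRED_CHECKS = {"unit-tests", "integration-tests", "code-review-signoff"}

def ready_for_production(passed_checks: set) -> bool:
    """Return True only if every required check passed for this build."""
    return REQUIRED_CHECKS.issubset(passed_checks)

# A build missing a sign-off must not ship:
assert not ready_for_production({"unit-tests", "integration-tests"})
# A fully vetted build may proceed (extra checks are fine):
assert ready_for_production({"unit-tests", "integration-tests",
                             "code-review-signoff", "lint"})
```

In practice this logic usually lives in the pipeline tool itself (branch protection rules, promotion stages), but encoding it explicitly makes the gate itself testable.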
With those basics in place, the obvious next step is testing the users’ lines of ingress and egress. Do the users’ paths through your systems resolve properly and in a timely fashion? This is your functional testing stage and your interface testing period. Selenium and other such UI and browser tools fall in here, and they can make your testers’ lives very difficult. Maintaining a huge battery of UI tests as the UI changes over time can become a full-time job.
This is why the tests at this stage of the game need to be automated and self-healing, if possible. As your application evolves over time, so do the tests needed to ensure its stability. This is the real meat of your day-to-day testing work, and likely where most of the labor should be located. By the time you’re working full steam on the user layer, your underlying testing regimen for the core business logic also needs to be firmly in place and stable, as tracing bugs back down the stack from the UI will become the norm, and having those lower-level tests in place allows you to rule out the lower levels more quickly when bugs pop up.
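The core of the "self-healing" idea can be shown in a few lines: a locator keeps an ordered list of fallback selectors, so a cosmetic change such as a renamed element id does not immediately break the test. This is a minimal sketch under assumed names; the `fake_dom` dict stands in for a real page, where with Selenium you would attempt `driver.find_element` for each candidate selector instead.

```python
def find_with_fallbacks(dom: dict, selectors: list):
    """Return the first element matched by any selector, or raise."""
    for selector in selectors:
        if selector in dom:
            return dom[selector]
    raise LookupError(f"No selector matched: {selectors}")

# The page once used id 'submit-btn'; after a redesign it is 'btn-submit'.
fake_dom = {"btn-submit": "<button>Submit</button>"}
element = find_with_fallbacks(fake_dom, ["submit-btn", "btn-submit"])
assert element == "<button>Submit</button>"
```

Production self-healing tools go further, learning new selectors automatically, but the fallback chain is the essential mechanism.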
Moving up the stack, the next place your tests need to solidify is around your application services and their advertising systems. Your registries, repositories, and service mesh/discovery platforms are your internal switching stations, allowing your digital trains to run on time, and when things go wrong in this layer it can be infuriating to untangle the mess of network traffic, XML, and HTTP that combine to obfuscate the data here. Solid testing at the network services layer allows you to sanity-check the traffic going between your applications, and to be further sure of just where a bug is coming from.
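A simple way to sanity-check inter-service traffic is a contract check at the service boundary, so schema drift is caught where it happens rather than deep in a UI bug hunt. This is a hedged sketch; the field names are illustrative assumptions, not a real service contract.

```python
# Hypothetical contract: fields the downstream service expects, with types.
EXPECTED_FIELDS = {"user_id": int, "status": str}

def validate_payload(payload: dict) -> list:
    """Return a list of contract violations (empty means the payload is OK)."""
    errors = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

assert validate_payload({"user_id": 42, "status": "active"}) == []
assert validate_payload({"user_id": "42"}) == ["wrong type for user_id",
                                               "missing field: status"]
```

Real systems would typically reach for JSON Schema or a consumer-driven contract tool for this, but the principle is the same: validate at the boundary, not after the fact.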
Finally, within these layers, you’ll want to prioritize any bits that have been troublesome in the past. This is fairly obvious logic, but it’s surprising how many teams can get caught up in the religion of test driven development and view problem children as regular students in the classroom of development. If you’re testing everything, the logic goes, you’re going to cover those problem areas, anyway.
But the truth is that those problem areas likely need the extra care and feeding only your developers and testing team can give them just after an issue has arisen, while the parts of the system they were working on are still fresh in their minds. Taking the time after fixing a problem to add extra tests, just to be sure you never regress, can save your developers from having to relearn those systems later, when everything is on fire.
- Learn the 10 Rules for Writing Automated Tests
From DevOps.com, we get an excellent list of 10 rules for writing automated tests, which we’ll summarize here:
- Prioritize
- Reduce, Recycle, Reuse
- Create Structured, Single-Purpose Tests
- Tests’ Initial State Should Always be Consistent
- Compose Complex Tests from Simple Steps
- Add Validation in Turnover Points
- No Sleep to Improve Stability (Sleep is the root of all evil in tests)
- Use a Minimum of Two Levels of Abstraction
- Reduce the Occurrences of Conditions
- Write Independent and Isolated Tests
You can dive deeper into each of these in the DevOps.com article, which offers specific, prescriptive advice on each rule.
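The “No Sleep” rule above deserves a concrete illustration, since fixed `sleep()` calls are the most common source of slow, flaky suites. A minimal sketch: poll for the condition with a timeout, so the test proceeds the moment the state is ready and fails fast with a clear error when it never becomes ready.

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.05):
    """Poll `condition` until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Example: a flag flipped by some asynchronous step in the system under test.
state = {"ready": False}
state["ready"] = True   # here it flips instantly; in real tests, eventually
assert wait_until(lambda: state["ready"])
```

Selenium’s explicit waits (`WebDriverWait`) implement the same pattern for browser tests; the helper above is just the idea in its smallest form.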
- Solidify the Process, Firm up Lines of Communication
Clearly defined lines of responsibility and communication are key to successful testing. First, you need to make it absolutely obvious to all of your team members which people are to write which tests. Is a developer writing this because it’s a unit test, or is a QA person writing it because it is a UI test? Who maintains this test over time? Who tests the tests? Who is responsible for running the test?
These may sound like obvious questions, but there is usually a grey area somewhere between the two, and it’s easy for older tests to fall into the abyss once their original authors have left the company. You’ll want clear lines all the way around each test, too: Who submits changes to the tests? Who is responsible for updating the frameworks and libraries used? Who writes the issues in the ticketing systems? Who closes bugs?
Generally, this will come down to a choice between developers and testers, each with their own benefits and drawbacks. Having developers handle the writing of tests, for example, allows for faster feedback and generally better quality tests. It also results in developer ownership of quality. Having developers write tests, however, can eat up the precious time they need to ship a new feature. It also doesn’t help that developers generally don’t want to write tests, and can be hard to motivate to do so.
Using QA to write tests means your testers can fully implement test-driven development, as they bring tests to bear on code as the source is being pushed through the pipelines. This does lengthen the feedback loop, however, and requires extremely well-controlled lines of communication between developers and testers.
- Infrastructure: It’s all about the Technologies
Developing your testing regimen is definitely about process, but if your processes and pipelines are too rigid, you won’t be able to accommodate new technologies. And new technologies are the lifeblood of good testing.
A good QA team needs good tools to aid their overall testing effort. That means Jenkins for building out pipelines for builds and tests, CI/CD tools for tracking the metadata around each step of the process, and internal tools for developers to ensure they’ve got proper code coverage. Whether that’s TestNG, JUnit, or any other unit testing tool, the important thing here is standardizing on something that works, not necessarily picking one specific thing.
Every build and test process is going to be different, but there are some very fundamental rules to follow when working with testing technologies that will help bring about success. First, the initial state of any staging or testing environment will need to be the same every time they are used. Variation leads to uncertainty, and uncertainty is the last thing you need in testing.
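The “same initial state every time” rule can be sketched as a fixture pattern: every test starts from a known seed, never from whatever the previous test left behind. The in-memory dict here is an illustrative stand-in for a staging database or environment; real suites would wire this up as a pytest fixture or an environment-provisioning step.

```python
# A known seed every test starts from (illustrative data).
KNOWN_SEED = {"users": ["alice", "bob"], "orders": []}

def fresh_environment() -> dict:
    """Return a brand-new copy of the seed state for each test run."""
    return {"users": list(KNOWN_SEED["users"]),
            "orders": list(KNOWN_SEED["orders"])}

def test_place_order():
    env = fresh_environment()
    env["orders"].append({"user": "alice", "sku": "X1"})
    assert len(env["orders"]) == 1

def test_no_orders_initially():
    env = fresh_environment()   # unaffected by test_place_order's mutation
    assert env["orders"] == []

test_place_order()
test_no_orders_initially()
```

The second test passes regardless of the order the tests run in, which is exactly the property a consistent initial state buys you.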
Once everything is set up to run automatically, optimizations are in order. Your first take on a build-and-test pipeline will likely include many areas where optimizations can take hold, from the files included, to the data scrubbed, to the network systems involved.
Finally, your testing environments and tools will function at their best when they are multi-tenant. You don’t want entire systems monopolized by single tests or single users. That’s inefficient, and wastes precious CPU cycles. Your goal should be to adopt technology that allows your build and test cycles to go as fast as possible, thus lowering the time it takes for your developers to get feedback.
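The parallelism that multi-tenancy enables can be sketched with nothing more than the standard library: independent test cases run concurrently instead of monopolizing the environment one at a time. Each “test” below is a stand-in callable; real runners (pytest-xdist, parallel CI stages) do the same thing at larger scale.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite_in_parallel(tests, max_workers: int = 4) -> list:
    """Run independent test callables concurrently; return their results
    in the original order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda t: t(), tests))

suite = [lambda: "login ok", lambda: "checkout ok", lambda: "search ok"]
results = run_suite_in_parallel(suite)
assert results == ["login ok", "checkout ok", "search ok"]
```

Note the prerequisite: this only works if the tests are independent and isolated, which is why that rule appears in the list above.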
- Testing Skillsets: The People Matter
Everyone wants their developers to handle more testing, and no DevOps person wants to end up as the testing manager. Repurposing existing workers as testers can work in the short term, but long term, you’ve got to have the right people with the right skillsets to implement a fast, reliable, and automated testing process.
DevOps as a practice has blurred the lines between developers and operations folks, but testing still requires a special set of talents that aren’t necessarily in the developer or operations toolbox. And yet, a good tester is generally a mix of operator and developer. That is to say, a good QA worker will be able to manage large numbers of systems in an automated fashion and maintain the code behind the tests themselves.
This is not a job that can simply be handed to a developer or an operator, sadly, as the work requires both skillsets, plus additional skills around testing as a discipline. Don’t give your testing team short shrift. Instead, embrace it. Set up a testing center of excellence in your company, where any development team can bring their applications and consult on proper test batteries.
If you do have to repurpose existing developers or operations people into testers, bring them into the center of excellence as a team, where the multiple skills can blend into a single entity that behaves as a model for proper test and QA for the whole company.
QA is the perfect foundation for a center of excellence, as the skills and work testers perform can be reapplied to other projects. Rather than requiring this center of excellence to test everyone’s applications, it can act as a sort of special operations team that descends on a project, brings it into a CI/CD testing regimen, and teaches the existing developers how to properly instrument, unit test, and integrate their applications.
Done right, such a center can help spread those specialized testing skills to other teams, and into the grey area between developers and operations so popularized by DevOps.
About the Author
Shani Shoham is the President and COO of Testim.io, an up-and-coming test automation platform that uses machine learning to create self-healing, stable tests. Prior to Testim, Shani managed business development for Perfecto Mobile, helping 2,000 enterprises optimize their customers’ cloud-based digital experiences. Shani is a serial entrepreneur who has brought six companies to market. He is an alumnus of the Stanford Graduate School of Business as well as the Technion.