By now, you’ve probably heard us talk about Continuous Performance Testing and Monitoring quite a bit. There’s a reason for that. Sure, it’s a process that enables IT teams to produce faster applications and deliver new features and enhancements in less time. It’s also a tool that helps simplify interactions across Dev, QA, Ops, and the business. To us, though, it’s a movement: one that allows IT departments to keep pace with today’s ever-accelerating software delivery, with Agile and DevOps as part of the solution.
A primary characteristic of Continuous Delivery is delivering early and often. To achieve this, teams must constantly learn and adapt throughout the entire cycle. Testing is present throughout the process, and integrating automation wherever possible is imperative. Neotys recently sponsored a webinar addressing this, entitled “Automating the Full Performance Testing Cycle to Speed Up Release Time.” For the full story, click here.
If you’re not able to set aside time for the full webinar, I’m happy to share some takeaways that you can pass along to your team:
- User experience is a crucial focal point, and should dictate how you design and execute your performance testing.
- While the concept won’t surprise you, it’s important to reiterate the value and impact of the user experience. Your users expect a seamless experience whenever and wherever they want it. And while page load speed is important, it’s when they cannot connect with you at all that will be remembered. As if it’s not bad enough that users have a forum at their fingertips to voice their opinions, a poor experience delivers a one-two punch to both brand and bottom line.
- A little dated, but IDC’s 2014 report underscores this stark truth – “… annual downtime costs can range from $1.25 billion to $2.5 billion for a Fortune 1000 firm, and the average cost of a critical application failure is $500,000 to $1 million per hour.” Devops.com responded with a good read, providing a link to the report.
- Neotys’ webinar opens with an unexpected turn before its eventual dive into performance testing, reminiscing about what the discipline used to be like by way of the old video game “Street Fighter.” The connection: the game’s “bonus stage,” a break from the main action, serves as a wild-west example of performance testing’s historical place in IT.
- For those of you who remember playing the game, you may recall the mindless repetition of kicks to the car to bolster your point total. If I understood the message correctly, the performance testing of yesterday resembled the unstructured, undisciplined behavior required to master the video game.
- The comparison may or may not hit home, especially if you’ve never experienced the “bonus stage.” What should resonate is that, unlike the video game to which its previous practices are compared, modern load testing enables a full understanding of the application’s behavior so that effective, comprehensive load tests can be conducted.
- To illustrate this notion, the presenter likens performance test design to old-school bridge building: teams are increasingly focused on building a structure that both supports its maximum weight capacity and remains unaffected by severe wind, rain, and snow.
Okay, so the game and the bridge might not do it for you. You’re probably also asking what’s really in it for you, especially as you consider your own current performance testing practices.
- Performance testing is time-consuming and requires manual intervention.
- Software delivery continues to accelerate, and those who deliver fastest, with the highest quality, win.
- Performance Testers/Engineers need automation at every step of the cycle to test faster, continuously.
- Automated testing is an integral part of the continuous delivery pipeline, but most teams are still using outdated manual testing processes. The initial effort required to set up an automated testing process makes teams want to avoid the pain, so they make do with manual testing.
- To truly benefit from automated test integration, it’s important to use the right tool, framework, and technical approach. Having the right approach involves knowing which tests to automate and which to continue running manually.
- Many of our customers (ourselves included) combine the open source tools Selenium WebDriver and Jenkins to run full end-to-end/UI automation testing.
- Always be analyzing your risk, and account for it during your design build (following the end-to-end user journey).
- When testing a new application scenario, map it to its purpose, relating it back to the original business plan. Test scenarios for existing applications provide great historical value if you let the logs be your guide.
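The takeaways above boil down to making performance checks cheap enough to run on every build. As a minimal sketch of that idea (not from the webinar; the function names, sample count, and latency budget are all illustrative assumptions), a small response-time gate like the following can run as a build step in a CI server such as Jenkins, failing the pipeline before a slow page reaches production:

```python
import time
import urllib.request


def worst_latency(url: str, samples: int = 3) -> float:
    """Fetch the URL several times and return the slowest
    observed response time, in seconds."""
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # include body transfer in the measurement
        worst = max(worst, time.perf_counter() - start)
    return worst


def latency_gate(url: str, budget_s: float) -> bool:
    """Return True when the worst observed latency stays within budget."""
    return worst_latency(url) <= budget_s
```

A thin wrapper script in a Jenkins shell step could call `latency_gate()` against a staging URL and exit non-zero when it returns False, which fails the build; richer end-to-end coverage of the same user journeys would come from the Selenium WebDriver tests mentioned above.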
As you think about where to go from here, don’t lose sight of the user experience and its impact on your business. The pace of delivery is not getting any slower, and it’s your embrace of continuous delivery that is critical to keeping your competitive advantage. Integrating smart, continuous automation into your performance testing process will speed up release times and help maintain that edge.
To see the webinar, click here.