
How to Scale your Load Tests with NeoLoad


You’ve got complex load tests coming up with a deadline. Your Virtual User (VU) load is significantly higher than anything you’ve tested before. As you prepare your scripts, you’re probably also thinking about Load Generator requirements – how the machines differ, their capacity, and how many you’ll need.

No manager or company wants to suffer the consequences of wasted time and budget from testing that failed due to unused or missing resources. The following guide will help you understand how to scale your resources to meet load testing demand using NeoLoad.

Load Generator Capacity Planning

Once the capacity of a single Load Generator is known, you can extrapolate the total number of Load Generators (LGs) needed to meet load test demand. For example, if the internal machines you’re using for LGs have a 1,000 Virtual User capacity based on current scripts and configuration, and you need to run a 10,000 VU load test, you will need 10 LG machines. Ultimately, knowing what each of your LGs is capable of handling is the key.

When figuring out LG capacity and performance, you should keep some core factors in mind:

  • Physical memory
  • Allocated heap space to the Java Virtual Machine (JVM)
  • CPU capability and bandwidth (local hardware, and network between)
  • Storage location, complexity of design, protocols used
  • Kerberos authentication (massive memory overhead), and XPATH extractions

Having a quad-core processor (or better), rather than dual-core, is a must for heavy load testing today. 12-16 GB of physical RAM is recommended, with one-fourth to one-half of it allocated as heap space for the JVM. Gigabit network cards (NICs) will alleviate LG communication and traffic bottlenecks. It’s critical to note that script design can have an adverse impact on LG load capacity. Refrain from using excessive Response Validations, avoid Kerberos authentication if possible (it can stunt load capacity and is known to max out at around 50 VU/LG), and watch out for redundant loops in transactions.
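
The exact file where an LG’s heap is configured varies by NeoLoad version, but JVM heap sizing uses the standard -Xms/-Xmx flags. As a sketch only (the values below are assumptions following the one-fourth to one-half guideline above, not Neotys recommendations), a 16 GB machine might allocate:

-Xms4g -Xmx8g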

Load Generator Capability

Once you know the hardware or cloud services you plan to use for load testing, you need to determine the capability of a single LG with your application and scripts. The best-practice approach is to create a load test using a single LG with the script(s) involved, configured to ramp up at a specific rate. Then watch the test and monitor the LG as it runs its load, noting the Virtual User level at which it reaches its limit. It is good practice to take 5-10% off this threshold as a safety margin (and to configure the rest of your LGs’ capacity accordingly). It is not wise to run your LGs hovering at their maximum limits. A load of 80-85% of any specific resource limit in your test is ideal. For example, if your load ramps up to 700 VU while your memory is at 100% utilization, review what your load was between 80-85% memory utilization. It could be in the ballpark of 575-600 VUs, thus establishing your planning point. If 600 VU is the mark that puts your LG at 85% of memory utilization, you can now safely and confidently identify the total LG count for your full load. In this example, if your load totaled 5,000 VU, you would plan for a minimum of 9 LGs (5,000 VU / 600 VU per LG = 8.33, or 9). Consider the following ramp-up test configuration example:

As part of the set-up, you would start with a set number of Virtual Users (in this case, 100), adding users over time or per completed iteration. The configuration above also displays the addition of 10 Virtual Users every 30 seconds. Your particular scenario might necessitate a more aggressive ramp-up (e.g., 400 VUs initially, with perhaps up to 50 VUs introduced every 30 seconds). It depends on where you expect the load to hit peak capacity and cause a bottleneck, and on the hardware backing your LGs.

Resource Utilization

After running load tests that ramped up the load over time, you’re going to see how resources were consumed. See the following example of high memory usage:

You can tell in the above that as the memory usage nears 85% (eventually 90% and above), the load is at 155 VUs – the limit for this LG (based on existing scripts and configuration). Use this to calculate the machine total. In this case, it would take 10 LG machines to run a load test for 1,500 Virtual Users (note that an LG may have a typical capacity of 250-5,000 VUs per machine, in some cases even more, depending on many factors).

There are other options that could increase LG capacity. Most significantly, you could increase overall system memory or the heap space allocated to the Java Virtual Machine. In this scenario and configuration, perhaps only one-fourth of the 8 GB of system memory (the default setting) might be assigned, setting 2 GB of heap space for the Java process running the load on this machine (the Oracle JVM is limited to about 1.5 GB of heap space in the 32-bit version). By doubling this memory allocation, you could see a similar or greater lift in machine capacity, as objects in memory are shared and re-used. Another significant consideration is to make sure you have the maximum processing power available.

There is also an inherent danger in allocating too much heap space to the JVM – more doesn’t always mean better. Allocating more heap than is needed can potentially degrade performance. The Java heap is where the objects of a Java program live – e.g., the Java process for the LG. It is a repository for live objects, dead objects, and free memory. When an object can no longer be reached from any pointer in the Java process, it’s marked as “garbage” and ready for collection. The JVM heap size determines how often and how long the VM spends collecting garbage. If you set a large heap size, full garbage collection is slower but occurs less frequently. If you set a smaller heap size matched to your memory needs, full garbage collection is faster but more frequent. The heap size tuning goal is to minimize JVM garbage collection while maximizing concurrent Virtual User volume during the load test on each LG. A heap space error would appear in the logs as “java.lang.OutOfMemoryError”.
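
For reference, a heap-exhaustion failure on a Load Generator typically shows the standard JVM message below; the “Java heap space” detail is the variant that points to an undersized maximum heap (-Xmx):

java.lang.OutOfMemoryError: Java heap space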

In Conclusion

To resource plan, you first need to determine Load Generator capacity. Remember to run a ramp-up test on one of the LG machines to establish its maximum capacity; scale back to 80-85% of the max load, and use this mark to calculate the total number of LGs required for your load testing. Pay attention to the factors that affect LG capacity. Have your desired outcomes in mind from the beginning, putting good design practice into scripting and resource configuration. Keep your designs as simple as possible. Plan for your resource needs, including hardware. Configure these resources to maximize their support capacity and delivery.

Learn More about Scaling Load Tests with NeoLoad

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.


Why Enterprises Look for Professional Load Testing Tools


Not all load testing tools are created equal, even if they provide the same type of service and belong to the same product category. And not all products provide the enterprise-grade capabilities required to ensure critical applications will be fast and reliable when deployed into production, for each new release. Organizations whose business is supported by critical applications need professional load testing tools and solutions. What are companies including in their evaluation criteria?

Load Testing Tool Requirements

  • Supports all technologies (legacy and current) used to develop applications
  • Simulates realistic user behavior, especially under the most complex scenarios
  • Scales load tests
  • Enables continuous testing
  • Integrates with the continuous delivery pipeline

CSS Insurance, a leading health insurance company in Switzerland, selected NeoLoad because it was able to meet their enterprise-grade requirements, supporting the complex performance testing needs of critical applications such as their Java/Hessian-based CRM system, the CSS customer portal, and their invoicing system.

Frank Lepper, Test Engineer at CSS Insurance, comments: “NeoLoad proved to be the only solution matching all our testing requirements, including the complex scenarios. With NeoLoad we can run and maintain 30+ test cases on a regular basis with 1 FTE.”

For the full story of CSS Insurance’s experience with NeoLoad, read the case study. Or, to start testing with NeoLoad yourself, click here.

You Need On-premises Deployment – Now What?


NeoLoad Web embodies the future of NeoLoad’s performance testing platform. It is designed to become the centralized and shared point from where performance testing teams will start their tests as well as analyze and access test results.

With NeoLoad Web, both SaaS and standard Docker container-based, on-premises deployment options are available. While SaaS is the quickest and easiest way to get started, on-premises deployment helps customers who have security or other concerns associated with SaaS.

What’s the Difference?

Often a SaaS solution does not provide the data control (localization) that may be required and that the on-premises alternative delivers. Choosing the right deployment model requires weighing several aspects, including upgrade cycles.

Several factors impact whether SaaS will provide the greater advantage. With the steady rise of SaaS, this is a decision many businesses are in the process of making.

Consider NeoLoad Web. Deploying the SaaS model is always going to be faster, as it leverages a ready-made platform that has already been provisioned, implemented, and tested.

It goes without saying that the on-premises version of software requires infrastructure, which can absorb time, personnel, and equipment to set up the new environment. Likewise, additional hardware and software purchases may be needed. In contrast, the support and maintenance associated with a SaaS solution involve minimal IT dependency; involvement is limited to customizations and design review. The provider takes care of infrastructure risks to ensure high availability and disaster recovery, and updates are provided automatically. However, some control is relinquished, as you need to entrust your data to Neotys.

With the NeoLoad Web on-premises installation, you are responsible for maintaining the application and its updates. You’re also responsible for data availability and disaster recovery. Updating the NeoLoad Docker images is another task you’ll need to own. If there is a primary reason to select this model, it’s the fact that it provides full control over your data, minimizing security risks.

Let’s get into the weeds and discuss NeoLoad Web on-premises installation.

NeoLoad Web On-prem – Step-by-step

  1. First, make sure you have read through the hardware and software requirements and can confirm that you meet them.
  2. Based on the type of install (standard or SSL), download the appropriate installation file: docker-compose-all-in-one.yaml or docker-compose-all-in-one-ssl.yaml. Place it into your Tomcat/Linux/Ubuntu directory.

NOTE: You can download it directly to your Unix box using ‘wget’:

wget https://www.neotys.com/documents/download/neoload-web/docker-compose-all-in-one.yaml

  3. Run the compose file using the following command: sudo docker-compose --file docker-compose-all-in-one.yaml up --remove-orphans
  4. Access the Web interface via browser at http://myMachine:80
  5. Note that when creating your Admin user credentials, you will need the output shown during the installation. The section with the admin password looks like this:

  6. Using NeoLoad Web: running a test on your NeoLoad Controller:

More About NeoLoad Web

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad (including NeoLoad Web) and start testing today.

#NeotysPAC – Performance Testing in a DevOps world: don’t spin in the middle, by Stijn Schepers


[By Stijn Schepers, Accenture]

I was honoured when Neotys asked me to present at the first Performance Advisory Council in Scotland. I do not often have the opportunity to meet world-class performance engineers to exchange ideas and talk about the future of our profession. So a big thank you to Neotys, who organized a unique event at a superb location: Borthwick Castle, a medieval castle built in 1430! A place where whiskey tastes even better than anywhere else in the world! This must be the perfect setting to talk about the future of Performance Engineering.

My presentation was about Performance Testing in a DevOps world: Don’t spin in the middle. It was a personal story about how DevOps has transformed my professional life in a positive way. DevOps has brought a lot of change to the way we work; when you embrace this change and use this shifting landscape as an opportunity to learn and grow, you can become a trendsetter of your future profession!

Waterfall Approach

For the majority of my career as a performance engineer, I was involved in huge programs of work delivered using a waterfall approach. These projects took a very long time to deliver (months to years), were costly, and always went live after the official go-live date (delayed). Performance load testing would happen after functional testing, and quite often this meant that you would start performance test executions just days before go-live. So there really was no time to do decent test execution and analysis, and no time whatsoever to solve any bottlenecks. If testing does not bring added value, it should not be done at all.

DevOps and Performance Testing

With Digital Transformation, the change from Waterfall to DevOps was needed so businesses could release more efficiently and quickly to meet customer demand. Instead of having one major release as a big BANG, in a DevOps delivery model smaller features are released in iterations. Iterations are typically 4 to 6 weeks but can be daily or even hourly! This move to DevOps meant that we had to change the approach to performance testing. We could not spend weeks and weeks scripting, cleaning data, and modifying frameworks. The focus of Performance Testing changed to adding value as quickly as possible. Performance Testing moved from E2E Performance Testing to Low Level Testing (Shift Left) and Operational Monitoring with Application Performance Management (APM) tools (Shift Right). Performance Testers should not only spin in the middle but should move more to the left and to the right.

Shift Right

When defining a Performance Test Approach in a DevOps delivery model, I see Shift Right as low-hanging fruit. Implementing a decent Application Performance Management (APM) tool is a no-brainer. An online business that really cares about performance (and therefore cares about revenue and branding) and puts its clients first should look at implementing a modern APM solution (Dynatrace, AppDynamics, New Relic). APM solutions measure the performance of an application by measuring the performance of “business transactions” throughout the system components. You can easily pinpoint which IT component is causing delays. You can drill down to code level or SQL query level. These insights are absolutely required to lower the risk when you release often to production. You get immediate feedback after a release. You know if the feature is used (bonus!) and if the release caused a degradation in performance. A test engineer should have access to APM frameworks in production so he understands what happens there (workload, slow transactions, error rate) and which functionality performs badly. Based on this knowledge he can conduct specific tests to improve the software. APM provides a test engineer with the means to turn testing from a problem-raising activity into a solution-providing activity. And think about it… is it not much nicer to provide solutions rather than nagging about problems?

Shift Left

Time and knowledge are limited, so we need to spend them wisely! Test the features that pose the highest risk. A risk assessment of the features to be released during the next iteration is a great start. Typically you can use a scorecard system for the assessment. Scores can be assigned based on business importance, technical complexity, load, front-end exposure, new or changed code, etc. The features with the highest score require the most attention.

Is it not funny that test engineers are sometimes blamed for the issues they detect? Excuse me! Who wrote the code??! So is it not time that developers take more pride in the code they write and start doing code profiling? For Java there are some great profilers like VisualVM and JProfiler. Profilers can help developers detect performance bottlenecks in their code. With Rapid Delivery it is important to profile the most crucial code.

E2E web-based load test scripts are very sensitive to change and take time to write. This is wasted time, as creating load test scripts doesn’t add value – executing load tests and analyzing the results do. Tools like NeoLoad focus on reducing scripting time, which is very useful for DevOps projects. Instead of writing (only) E2E scripts, it may be more efficient to create an automated test framework based on API calls (REST/SOAP). REST and SOAP calls are easy to create and typically don’t change so often. Building a regression framework to benchmark a release or a build (CI) can accelerate testing and increase efficiencies.

Conclusion

The image below provides a great view of what performance testing in a DevOps world means. A test engineer should not limit testing to End-to-End testing but should look into Shifting Right and Left.

As a performance engineer, you need to “dance the dance” and not “fight the fight”. When DevOps is done well and a culture of cooperation and trust is created, a test engineer will start to love his passion again… like I did. It will be easier to dance the dance, and less frequently will you hit your head against the wall. I absolutely love DevOps and look forward to what the future of digital transformation will bring.

For more detailed information about testing in a DevOps world, the following three blog posts can be of help:

 

To know more about Stijn Schepers, here is his biography!

You can read Stijn Schepers’ PAC presentation here.
Want to know more about this event?

7 Trends in Modern Load Testing


Comprehensive load testing is now a critical part of QA activity in the modern enterprise. The discipline has grown considerably as more computing resources reside within Cloud infrastructures. In fact, Cloud Computing forces application developers to support a degree of scale and speed that was unthinkable when the standalone PC was the norm. As expectations grow, so too does the burden of performance and load testing.

Trends are emerging as new companies try to meet the new demands of performance testing in general and load testing in particular. These trends are:

  1. Putting More Emphasis on Shift Left in the Automated CI/CD Pipeline
  2. Shifting Right with Application Performance Monitoring
  3. Cloud-based Testing On-demand
  4. Keeping on Top of the Google Effect
  5. The Need to Support Event-driven Messaging Architectures
  6. More Devices, More Often
  7. Weaving AI into the Fabric of Load Testing

Let’s take a look at the details.

1. Putting More Emphasis on Shift Left in the Automated CI/CD Pipeline

QA is demonstrating a growing embrace of the Shift Left movement. This “movement” is an analogy derived from a project management chart in which the progress of tasks toward the completion of a project moves from left to right. The Shift Left sensibility puts more emphasis on tasks at the beginning of a project than at the end, hence the notion to “shift left.”

QA is embracing Shift Left by implementing more automated load testing early in the development process. There’s a growing trend among test engineers to run short bursts of low-volume load tests as code gets deployed throughout the sprint, rather than only performing intensive load testing toward the end, just before production release.

The benefit of Shift Left is that it uncovers problems early on. Fixing issues early in the software development cycle is considerably less expensive than addressing them downstream. For QA, the Shift Left motto is: test early, test often, fix fast, fix cheaply.

2. Shifting Right with Application Performance Monitoring

Companies are also Shifting Right. However, it’s not humans who are doing the shifting; it’s the technology. More companies are using Automated Performance Monitoring technology to keep an eye on code once it ships to production. The technology automatically watches an enterprise’s software assets for signs of stress and failure. Companies will Shift Left to load test during development. Then, they’ll perform essential full-scale, regression load testing just before release to production. Once the code is live, the Automated Performance Monitoring technology provides the ongoing attention required to make sure applications and servers are running according to the conditions set in the operational Service Level Agreement.

There’s a lot of code running on the Internet today. Automated Performance Monitoring is essential to ensuring that it’s running as it should.

3. Cloud-based Testing On-demand

The benefits of cloud-based computing have spilled over into the testing domain. Having experienced the cost-effectiveness of “pay only for what you use” computing, more companies are relying on service providers to dynamically provision the testing infrastructure required to meet any testing scenario at hand. It makes sense. To do otherwise is a foolish use of money.

Not every service provider is well-suited to provide testing environments, as tools and workflow requirements are highly specialized. However, the need is growing. The result: more companies will emerge on the technical landscape focused on providing state-of-the-art testing services at web scale.

4. Keeping on Top of the Google Effect

“Each time a consumer is exposed to an improved digital experience, their expectations are immediately reset to a new, higher level.” – Brendan Witcher, Principal Analyst at Forrester

As Google pushes the envelope in terms of scale and speed of activity on the modern Internet, users increasingly expect faster and more reliable applications. It’s called the “Google effect.” Google sets the bar for performance higher, and then every other application must meet that new standard. The Google effect results in more companies turning to automation and cutting-edge technologies just to remain competitive. Service Level Agreements are going to become more aggressive. Testing activity, particularly load and performance testing, is going to become more ambitious regarding baseline expectations. The pressure will not let up. There will be winners, and many losers, particularly among those companies that do not have the resources to accommodate continuous innovation.

5. The Need to Support Event-driven Messaging Architectures

Network latency is a killer for any distributed application. Web applications that run over HTTP are particularly susceptible to high degrees of latency due to the nature of the protocol. One of the benefits of an event-driven, messaging architecture is that it avoids the latency inherent in HTTP. Applications can just fire and forget. It’s a different way of doing business.

More companies are taking advantage of the benefits of messaging. But messaging brings its own set of problems: dropped messages, inadequate capacity to meet storage demands and, yes, latency in message distribution between publisher and subscriber. To mitigate the growing risks, companies are going to need to load test beyond the standard HTTP request/response. Ultimately, this creates a different way of doing business, requiring not only different testing tools and methods but also a different mindset.

6. More Devices, More Often

Smartphones and the Internet of Things (IoT) have been taking over the Internet. According to Forbes, since 2014, smartphone sales have doubled those of non-phone devices. Business Insider predicts that in 2018, IoT will be more significant than the smartphone, tablet, and PC markets combined. With smarter automobiles and wearable devices, the transmission and consumption of data are taking the digital world to levels that were previously unimaginable. All these devices, and the endpoints to which they transmit data, need to be tested. IoT is also involved in life-and-death scenario management, for example, enabling the safe operation of a self-driving long-distance tractor-trailer. Suffice it to say, these devices operate 24/7 with no let-up.

Companies are going to have to come up with new ways of testing that work seamlessly with these new devices. Load testing in the IoT infrastructure will need to go beyond traditional methodologies. The nature and scope of performance are different. For example, how do you inflict threshold load on a driverless vehicle? Stressing out a database is easy by comparison.

The growth of devices, and the demands they create, will require companies to adjust their approach to testing with new ideas. There are few options otherwise.

7. Weaving AI into the Fabric of Load Testing

Companies are becoming more visionary regarding the use of AI in their load testing. The growth of test automation has also increased the volume of production and test result data available for analysis. The amount of data is so significant that the only way to make sense of it all is to use Artificial Intelligence and Machine Learning. AI and Machine Learning are particularly useful when it comes to anticipating potential failure events and scaling up the environment to avoid disaster. Although it is still unclear what will be possible with AI, it is undoubtedly an essential domain of interest in the short to mid-term for performance engineers. Companies understand the benefit of using AI in the testing landscape and will be using it more to meet the growing demand to keep services up and running 24/7.

Putting it All Together

The common thread that runs through these seven industry trends is the need to devise new ways to meet the testing demands of a technical infrastructure that continues to grow bigger and faster. Best practices for load testing need to evolve to meet these emerging demands. Companies that meet the challenges ahead are sure to prosper, as will the customers and employees they serve.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob has authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn here, or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).

Mitigate Digital Transformation Risk with a Realistic Testing Strategy


Digital transformation has been adopted by organizations across the globe, allowing them to streamline business processes, accelerate innovation, and pursue new revenue sources. It has compelled companies to focus on essential organizational competencies that promote ideation, collaboration, and flexibility. In short, digital transformation has allowed businesses to replace their legacy low-scale, low-leverage point solutions with more nimble tools that transform enterprise offerings and help organizations deliver value faster and achieve better business results than ever before.

So what’s the problem? And as a tester, why should you care?

Inherent Risks of Digital Transformation – Technical Failure and Revenue Loss

In today’s competitive, global, and digital world, customers demand even more value, personalized communication, targeted messaging, and excellent user experiences. Customer demand is driving organizations to work feverishly on being more agile, customer-centric, and efficient as they strive to capture every opportunity to increase revenue. Amazon and Google achieve their differentiation via a relentless focus on the quality of service. Competitive advantage and market dominance protection stem from the commitment to excellence – elegance in application design, optimized speed and performance, and extreme usability. A key priority in this battle for dominance is application performance – performance undergirded by a relentless focus on load testing.

Brand offerings must be strategically tested to minimize errors, delays, and the crashes resulting from traffic spikes and connectivity lapses. Strategic testing, however, does not imply limited testing. In fact, the application under test must be exposed to comprehensive, across-the-board test interaction with every application layer. Respective applications must maintain quality benchmarks while ensuring exemplary user experiences and delivering promised value. When offerings are not adequately tested, companies not only lose revenue, they experience loss of goodwill, must contend with social media outrage, and invest effort and resources to win back the confidence of frustrated customers.

Consider a real-world example. Two days before Christmas, a friend decided to buy a Walmart gift card. He attempted to purchase the card five times on Walmart’s site. Each time, he input his credit card number, completed the transaction, and received a confirmation notification. And each time, within 30 minutes, he received an email from Walmart declaring that they were unable to process the transaction. Frustrated, my friend finally gave up and purchased a Target gift card using the same credit card he had used on the Walmart site. The transaction went through on the first try.

This scenario drives home several points:

  • Walmart lost a $100 transaction and an undetermined amount of additional revenue from the card’s intended recipient.
  • It’s unlikely that my friend was the only prospective gift card buyer that day as many last-minute shoppers were contemplating a Christmas gift card purchase. Walmart lost that revenue too.
  • My friend and undoubtedly many other gift card shoppers were warmly welcomed by Walmart’s competitors, vendors who were just a few clicks away, ready and eager to process any and all gift card transactions.

Realistic Test Strategies Reduce the Risk of Application Failure

Testers formulate realistic test strategies to ensure sufficient testing coverage within the timeframe allotted for a test. QA teams understand that it’s not possible to test 100% of any application. Therefore, their test strategy must address the application facets and components that pose the most risk. This sounds simple, but in the world of digital transformation, formulating test strategy itself can be fraught with risk.

Risk creeps into applications from many directions. Consumers in the digital economy continually elevate the thresholds that describe the user experience and provide real-time feedback directly to businesses, while demanding faster transactions and more efficient, personalized digital experiences. On the business side, initiatives to expand the reach and increase monetization opportunities introduce more complexity into enterprise architectures and ecosystems by incorporating analytics, mobile devices, and the Internet of Things. Add in fluctuating market forces and the advent of new technologies, and the emphasis on application performance just got more intense. As their test timeframes decrease, testers need to prioritize testing application components that support transactions as opposed to those that support, say, customer profile control. The first supports revenue and the second may arguably impact the user experience.

Neotys has referenced that QA teams can identify 75-90% of defects and performance issues using only 10-15% of their most successful test scenarios. However, for this to work in practice, testers must have access to reliable builds, have sufficient application architecture awareness (something often easily addressed in agile and collaborative environments), and have access to the appropriate tools for diagnosis. And of course, they need to craft realistic yet comprehensive test strategies.

The Essence of a Realistic Testing Strategy

How the word “realistic” gets defined depends on whom you speak with and where they work. Testing strategies vary by organization, depending on the Application Under Test (AUT), resource and tool availability, and desired business goals. The reasons for this are many. The business needs to define its quality benchmarks and designate which pillars of quality are most critical to the brand. Further, team dynamics influence testing process flexibility and speed. For example, agile environments demand that load tests be executed at the beginning of the development process and that the application proceed through continuous testing. Other development environments may not stipulate this requirement.

There are several fundamental facets of a realistic test strategy.

Design Realistic Tests

Testers need to understand how software applications should respond to real-world scenarios; this insight provides the basis for successful performance test design, helping teams prioritize which areas of the application are most risky. As part of this process, testers must consider the device types, environments, anticipated load, and data types that must be supported within the application ecosystem. They then need to align this understanding with their preproduction environments to assess which scenarios can be tested in preproduction versus production.

Consider Key Performance Targets

Service Level Targets

  • Availability or “uptime”: Amount of time an application is accessible to the end-user.
  • Response Time: How long it takes for the application to respond to user requests, typically measured as system response time.
  • Throughput: Measures the rate of application events (E.g., the number of web page views within a specified period).

Capacity Targets

  • Utilization: Capacity of an application resource. This has many parameters that relate to the network and servers, such as network bandwidth, system memory, etc.

Define and Quantify Performance Metrics

  • Expected response time (time required to send and receive a request-response)
  • Average latency time
  • Average load time
  • Anticipated error rates
  • Peak activity (users) at specified points in time
  • Peak number of requests processed per second
  • CPU and memory utilization requirements to process each request

Be Mindful of the User Experience

The people using the application may reside in different geographic locations, which can impact bandwidth, data transfer efficacy (packets dropped), and latency. User behavior describes how users interact with your application. Understanding these behaviors and the paths users take to navigate through workflows is an essential underpinning of a realistic test strategy. Further, users’ preferred devices will vary and possess different hardware, firmware, and operating systems. Realistic performance test design needs to take all these factors into account.

Conclusion

As businesses journey through the cultural and process changes that result in digital transformation, the risk will permeate all areas of application development. To maintain service levels, availability, capacity, and response time, QA teams must focus on application speed and performance. Whenever testing places too little emphasis on these areas, the risk of application failure and revenue loss increases dramatically. To ensure application success, QA teams need to craft test strategies that prioritize performance testing and administer it earlier in the development cycle.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Deb Cobb
Deb Cobb has deep expertise in enterprise product management and product marketing. She provides fractional product marketing, go-to-market strategy, and content development services for software and technology companies of all sizes. To learn more about Deb, her passion for collaborative thought leadership delivery, review her portfolio, or to connect directly, click here.

How to Extract Dynamic Values & Use NeoLoad Variables


Designing a load test involves the creation of a script – a set of calls, requests, and actions to an application server. The “script,” in its raw form, is usually a recording of an emulated user interacting with the application.

With any typical user session accessing an application server, there are parameters (and values) that give the session its unique fingerprint. Why is this important? If left unmodified, these parameters will play back to the server with their originally recorded values. Those static values were specific to the original recording, are no longer valid, and are not suitable for any playback of the script. Some of these parameters include session IDs, tokens, timestamp values, and Universally Unique IDentifiers (UUIDs), which must be different every time the script runs.

Dynamic Parameters

Handling these static values so that they appear dynamically distinct each time a load test runs is called correlation – what many refer to as the bread and butter of “design.” Many of the best load testing tools aid in the identification and handling of dynamic parameters. Some tools, like Neotys’ NeoLoad, can automate the handling of the many occurrences of dynamic parameters through a feature called Frameworks. This can save hours or even days of designing and maintaining a script in preparation for load testing. Once you identify what is dynamic in your script and create correlation for it (extraction using a regular expression, plus replacement), you promote it to a Framework. This allows NeoLoad to search for dynamic values throughout your script and automatically handle the correlation. The time saved can be significant, which makes your job easier and less stressful.

Google Ad Services Request in NeoLoad

Some examples of unique dynamic values that web applications use come in the form of timestamps. Since a timestamp is specific to when a user is accessing the application, the value will eventually become invalid and need to be replaced with a variable that makes it dynamic. Most web applications use Epoch timestamp values, i.e., the current milliseconds since January 1, 1970. Take this screenshot (right), for example, using a Google Ad Services request.

It uses a value of “1519246164376” for the “random” parameter, which is equivalent to Wednesday, February 21, 2018, 3:49:24.376 PM GMT-05:00 (US Eastern time zone). If you try to replay the script using this value after a day or more, the server is likely to reject your request, yielding a script error. With NeoLoad, you can replace this value with a variable based on Current Date using the current-time-in-milliseconds pattern (Epoch time).
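
You can double-check such a value with any epoch converter; for instance, on a Linux machine with GNU date (dropping the millisecond part):

date -u -d @1519246164   (prints: Wed Feb 21 20:49:24 UTC 2018, i.e., 3:49:24 PM US Eastern)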

Session Identification

Almost all web applications use some form of session identification to designate an individual user’s application access period, making it unique. This parameter will contain an alphanumeric value that is stored in a cookie, form field, or URL. The value can sometimes incorporate a timestamp (see above) or other more complex factors, and may take the form of a “jsessionID” or UUID. While NeoLoad typically handles session IDs very well, it can just as easily handle more complex correlation methods. No matter how it is dealt with, when a user (or virtual user) visits a website, it is given unique session info that is valid either for as long as the browser is open or for a specified amount of time. This is why it is essential to ensure that you’re using dynamic parameters.
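
As an illustration (the cookie attributes here are assumed; the session value is borrowed from the JPetstore example below), a server typically assigns the session ID in a response header such as:

Set-Cookie: JSESSIONID=F47AC58DD1575B9CBBB1361944B5C571; Path=/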

Dynamic Parameter Identification

As you can see, it’s important to identify and handle parameters with dynamic values. Using static values from a recording will result in script and load test failure. Being able to determine which parameters are dynamic is just as important as how you handle them. There are some fundamental ways (which this article will not go into) to help with identification – one common practice is to compare a recording to a playback of the same script, looking for value similarities. Pay attention to any errors in responses that might give clues about what was sent improperly. If an assigned session ID is invalid, you will likely see a 403 or 500 status code. When it comes to comparing requests to identify dynamic parameters, NeoLoad delivers a side-by-side view. For now, let’s focus on how to handle a parameter once you recognize it is dynamic.

JSESSIONID Parameter in Cookie - NeoLoad

For this example, we’ll use an online public demo e-commerce application called “JPetstore,” in which you can shop for animals to purchase. After recording the simple scenario of going to the main page, selecting the “Cats” category (including type), and proceeding to checkout, we will examine a dynamic parameter for the sample user’s session. As shown in the screenshot of the response (left), the application is using a jsessionID parameter in a cookie to set the session info for the user’s visit.

Regular Expression Extraction - NeoLoad

Usually, once you identify where a dynamic parameter is, you can create a regular expression extractor. This way, the value of this parameter is extracted from the server response for each script run and user session/visit. NeoLoad has an easy way to create an extractor using a variable. To illustrate this, refer to the image of a regular expression extraction that allows you to use it as a NeoLoad variable later on (next step).

You will notice the value at the bottom of the window, which allows you to confirm that your regular expression is correct. This is the first part of correlation – identifying the dynamic value and creating its extractor using a regular expression. The next is a manual step: replacing the value in any requests that use it with the variable – in this case jsessionID, or within NeoLoad, ${JSESSIONID}. You would search for requests with the value “F47AC58DD1575B9CBBB1361944B5C571” and replace it with the NeoLoad variable from the extractor. This, in particular, is where NeoLoad outperforms most of its competitors.
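
As a sketch (the exact pattern depends on how the value appears in your server’s response), a regular expression for this extraction might look like:

JSESSIONID=([A-F0-9]{32})

with the captured group stored in the ${JSESSIONID} variable used for the replacements.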

Converting Extractor in Framework - NeoLoad

As mentioned previously, NeoLoad’s Frameworks feature enables the automation of correlation, removing the time-consuming effort associated with handling dynamic parameters. It will even handle ALL occurrences of a dynamic parameter, so long as it matches the regular expression and a corresponding value can be found in a request. Very often this can save dozens of hours or more spent finding and handling all the correlation throughout your scripts. To see this in action, refer to the wizard and configuration (left) for promoting/converting an extractor into a Framework.

Replacing Values for Dynamic Difference - NeoLoad

Once the Framework is created, it will search and replace any matches it finds for the regular expression and corresponding value as illustrated below. As you can see, the value is now replaced by the extractor variable. This allows the script to extract the value from the server response whenever the script is run, replacing the value in the request, thus keeping it dynamic and current for each user iteration.

As the extractor replaced many occurrences of the jsessionID value throughout the process, you can see how much time can be saved; it will be automated for any future recordings. This means that there is no manual work required for this value going forward.

Conclusion

Correlation is the core of designing scripts used in load testing. It can be difficult and time-consuming, as you need to first identify what is dynamic, find the response you can extract from, and match all the occurrences in requests with the regular expression extraction. NeoLoad makes this process easy and saves you time by automating your correlations through Frameworks. With NeoLoad, you have more time to focus on the other challenges of your load tests instead of being set back by handling dynamic parameters.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

How to Run NeoLoad Command Line Switches


Out of the box, NeoLoad’s command line functions are potent tools for manually launching particular actions. One of the original reasons for adding them was to give users the ability to drive NeoLoad with Ant.

Command line functions are also handy for basic batch script automation, which is often the method required by other integrated applications, such as the Continuous Integration tool Jenkins. Additional use cases for the command line include Selenium in EUX mode, YAML for cloud load generators, or even variable injection at runtime. They can also be useful when the need arises to run the product in non-GUI mode.

Running from Command Line

The syntax for running a command line function with switches looks something like this. Open a command prompt and ‘cd’ into your NeoLoad installation’s \bin directory (e.g., cd C:\Program Files\NeoLoad 6.3\bin), then at the prompt type:

NeoLoadCmd -project "C:\archived\NeoLoad_Projects\Vanilla\Vanilla.nlp" -launch scenario1 -noGUI -report "C:\archived\NeoLoad_Projects\Vanilla\report.xml"

Breaking Down Command Switches

  1. NeoLoadCmd: starts NeoLoad.
  2. -project <file.nlp>: opens the specified project file.
  3. -launch scenario1: starts this scenario (the name must be spelled correctly).
  4. -noGUI: runs without opening the NeoLoad GUI.
  5. -report <report.xml>: creates a report file.

Checkout Shared Projects with Command Line

If you’re working with a Neotys Team Server (NTS) and have a shared project, you can use switches like -Collab, -CollabLogin, and -checkoutProject, to name a few, to manage your collaborative project. For an in-depth view of the available commands, see the list of arguments.

Connecting to Cloud Services and NTS Servers

-NTSCollabPath, -NTSLogin, and -NCPLogin (Neotys Cloud Platform login) are available switches that take input for your command line functions to work. The format must be as follows: -NCPLogin “<login>:<hashed password>” or “<token>”. For example: -NCPLogin “loginUser:VyVmg==”

Additional Switches with Command Line

Though there are too many to mention in this article, it’s worth noting that you can publish shared project results with -publishTestResult (though this cannot be used with GIT, it does work with NTS’s SVN mode). There is also a handy comparison switch, -comparisonReport <files>, which generates test comparison reports (comma-separated list of report files). The base report is set using the -baseTest argument.
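
As a sketch only (the project path and result names below are hypothetical, and the exact -baseTest value depends on how your results are identified), a comparison run might be launched like this:

NeoLoadCmd -project "C:\projects\store\store.nlp" -comparisonReport "results\run1.xml,results\run2.xml" -baseTest "run1" -noGUI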

Breaking Down the Following Command

NeoLoadCmd -checkoutProject Project51 -launch “scenario1” -noGUI -NTS “http://10.0.5.11:18080” -NTSLogin “noure:QuM36humHJWA5uAvgKinWw==” -leaseLicense “MCwCFC54ZB4sNH1q9RnNrlKi7MM0q+20AhRNmr10XW3c3qIdrzSpyQbAIBCwqQ==:50:1” -exit -NTSCollabPath “/repository_1”

  • checkoutProject Project51: checks out the project from NTS.
  • NTS “http://10.0.5.11:18080”: specifies the NTS server for the license and the project.
  • NTSLogin “noure:QuM36humHJWA5uAvgKinWw==”: specifies the user credentials for the NTS server.
  • NTSCollabPath “/repository_1”: specifies the repository path on the NTS server.

Note: The -checkoutProject command will only use the specified NTS server if -NTSCollabPath is also specified. As a reminder, make sure to check out the full list of command arguments.

Conclusion

Command line functionality, initially added for our integration with Ant, has since evolved into a robust out-of-the-box set of tools allowing access to nearly all of the functions of Runtime and Results.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.


Understanding Test Reuse in Functional and Load Testing


There are few people on Planet Earth who want to reinvent the wheel. This is particularly true for those who practice the art and science of software development. Modern software development is all about reuse: writing code once and using it over and over again in a variety of situations.

Code reuse is the path to maximum efficiency, not only for programming but also for testing. However, the guidelines for reuse in testing are a bit more complicated, especially regarding functional testing and load testing. The intentions are different. The scope is different. The way you approach test reuse is different. Allow me to elaborate.

Code Reuse in Functional Testing

The concern of functional testing is to ensure that all aspects of the software under test work according to expectation. For example, imagine we have an application that calculates vehicle insurance rates based on the profile of the driver and the vehicles he or she owns. A user submits his or her information to the application, and the application responds with a rate.

Listing 1 below shows an example of two of the many datasets that are appropriate for use as part of the functional test of the application. Each dataset reflects the profile and vehicle information of a particular customer.

Listing 1: Sample datasets for use in functional testing

The functional test submits the customer profile information to the application’s web page interface and also to the API that represents the backing application logic. (Please see Figure 1.) The test receives a response and performs a variety of assertions upon the response based on the business rules in play. For example, did the information make it through validation properly? Was the insurance rate that the application returned in the response accurate? The functional test might record the timespan between initial request and final response in order to get a sense of which tests are running long, beyond expected norms.

Making Reusable Functional UI Testing Easy

One of the easiest ways to create reusable functional UI tests is to use a tool that allows test developers to record UI interactions. Selenium is a favorite tool that test developers can use to record test interactions on a web page. However, Selenium does not allow recording UI activity on native mobile apps. NeoLoad picks up where Selenium leaves off: it allows test developers to record UI activity in native mobile applications. The recorded test scripts can be reused later on as part of the functional testing process.

Also, behind the scenes, there are system monitors that observe the behavior of the components that make up the system.

Figure 1: Functional tests measure the accuracy of a response based on data submission as well as the timespan between request and response

After the test scripts execute the request and response behavior, test information is gathered. The information collected will include the assertion results and timespan measures, as well as the information gathered from the system monitors. The system monitors will detect side-effect errors that might not be apparent in the test results. Testers analyze the compiled data to determine whether the application gets a passing grade and is ready to move on to the next phase of testing. If the functional tests fail, the application is sent back to development for remedy.

Returning to our vehicle insurance application example, we’ll want to generate customer profiles in an automatic and random manner, and we’ll want to know beforehand what the correct results should be. Some functional tests might require ten distinct customer profiles, while others could require 10,000. Wise functional test planning will include creating reusable code that allows test scripts to create random customer profiles. Once such reusable code is created, getting customer profiles requires nothing more than telling the code the number of profiles to generate, for example:

var profiles = profileGenerator.getProfiles(numberToGenerate);
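
A minimal sketch of what such a generator might look like, in the same JavaScript-flavored style as the snippet above (profileGenerator and getProfiles come from that snippet; the profile fields and value ranges are invented purely for illustration):

var profileGenerator = {
  getProfiles: function (numberToGenerate) {
    var profiles = [];
    for (var i = 0; i < numberToGenerate; i++) {
      // Each profile is random but valid; expected rates would be computed alongside
      profiles.push({
        driverAge: 18 + Math.floor(Math.random() * 60),   // drivers aged 18-77
        accidentCount: Math.floor(Math.random() * 4),     // 0-3 prior accidents
        vehicleCount: 1 + Math.floor(Math.random() * 3)   // 1-3 vehicles owned
      });
    }
    return profiles;
  }
};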

This sort of reuse makes functional testing easily repeatable while offering a high degree of variety and reliability. This is good and useful for functional testing. However, when it comes to load testing, we need to take another approach to code reuse.

Implementing Code Reuse in Load Testing

Load testing is different from functional testing. Functional testing is about making sure code behaves as expected regarding accuracy and failure. Load testing ensures that an application can stand up to the rigors of high-stress usage. Functional testing is about code. Load testing is about usage and environment. There is ample opportunity to take advantage of reuse in load testing, but the approach is different from that of functional testing.

With functional testing, we need many different types of customer profiles to ensure the application responses are accurate.

For load testing, we are going to need two types of customer profiles. One kind needs to have a large number of vehicles defined, with random data generated dynamically. The other kind of profile needs vehicle data that is static. It’s a question of testing data against persistent storage (e.g., databases) versus testing against the cache. When an application keeps seeing the same load test data continuously, it will defer to its caching mechanisms for data. When data is new and previously unknown, applications will retrieve it from storage. Measuring load tests against both cached and uncached data provides useful insights into application performance under load.

Once we define static and dynamic customer profiles, we will assign them on the fly as part of a virtual user’s request behavior. Using virtual users allows us to leverage the power of reuse in load testing. (See Figure 2.)

Figure 2: A load test creates a large number of virtual users, each executing a limited test script designed to create a measurable load burden

A virtual user (VU) is a computer-generated representation of a person interacting with an application. Just as we created customer profiles to submit to the system for functional testing, we create virtual users to provide customer profiles.

A virtual user can be configured to send a request from a specific locale, using a particular browser. For example, creating a VU that enters data into a Firefox browser from an IP address in France or another VU that enters data in an Opera browser from an IP address in Japan. The critical thing about reuse in load testing is that it provides the ability to automatically create any number of virtual users according to a variety of configurations. Testers use a single piece of code to generate 10 or 10,000 virtual users depending on the need at hand. Being able to increase or decrease the load on demand is an extraordinarily powerful feature when it comes to implementing flexible load testing.

Putting it All Together

Functional reuse focuses on creating tests that ensure operational accuracy for each particular point of access to the application. Thus, the reuse needs to accommodate a more granular application surface. Load test reuse requires that tests can be applied at a higher level of execution, focusing on speed and fault tolerance. Test reuse is a robust approach for any company that wants to save time and money in its testing process. The trick is to make sure that the right type of reuse is applied to meet the need at hand.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob’s authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn here, or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).

How to Configure NeoLoad for Mobile Testing


NeoLoad supports mobile testing for native, hybrid, web, and secure applications. Each of these application types must transfer its data over HTTP so that NeoLoad can consume the traffic into a user path.

Mobile testing is a powerful addition to the many features within NeoLoad. Not only can you test your applications from a use case standpoint, but you can also emulate users who access your application as if they were using smartphones.

When emulating, both iPhone and Android models are covered with a variety of versions from which to choose. How to handle this during a test is explained later in this article.

Configuring a recording via a mobile device requires setting up proxy mode, although tunnel mode can also be used if the proxy method is unavailable.

Start Recording in Proxy Mode

Once at the “Start Recording” dialogue, click on the Advanced tab:

The Modes group box is where you choose the recording mode type mentioned earlier.

  • Proxy: Selection of this option enables recording launch in proxy mode.
  • Tunnel: Selection of this option enables recording launch in tunnel mode.

Proxy and Mobile Recording

Most native apps are designed with an API that utilizes a proxy, so NeoLoad can act as that proxy to capture the activity and record a native application in proxy mode. The communication between the device and the server is recorded through the proxy-based recorder. To set it up correctly, both NeoLoad and the mobile device must be on the same network (sometimes, this requires disabling other networks on the mobile device). At this point, you have the mobile device connected over Wi-Fi to the same network as NeoLoad. On the mobile device, it is necessary to manually modify the proxy used by the app or browser. The hostname must be the name or the IP of the NeoLoad machine. The port NeoLoad uses to record is 8090 by default.
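
On the device itself, the manual proxy settings would look something like this (the IP address is only an example; the exact setting names vary by phone OS):

Wi-Fi network -> Proxy: Manual
  Hostname: 192.168.1.20   (the name or IP of the NeoLoad machine)
  Port: 8090               (NeoLoad's default recording port)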

Also, the proxy mode option must be selected, and it is recommended to uncheck the “Start Client” option as the mobile device itself generates the traffic. As soon as you start recording, the mobile application can be used, and NeoLoad will catch the HTTP traffic to create the test scenario’s user path.

Tunnel Mode and Mobile Recording

When an application does not support the definition of a proxy, it is necessary to use tunnel mode. This mode makes it possible to record any mobile application, be it web, native, standard, or customized, for any platform and any version. However, if your application supports a proxy, the proxy method is strongly recommended, as it’s typically more straightforward than its tunnel mode counterpart.

To record in tunnel mode, switch the option from proxy to tunnel mode in the advanced recording dialogue, and de-select “Start Client.” The NeoLoad tunnel function makes it possible to launch the NeoLoad Recorder without using the NeoLoad Proxy. In this way, tunnel mode simulates the web server for the mobile device, processing the requests and responses as the web server would.

Important Tips: 

  • To get mobile recording to function correctly, you’ll need to install the Neotys certificate on the phone being used. Email it to your phone, then tap to install it.
  • Additionally, when you set up your phone to access a gateway to the Internet, use the IP address of your NeoLoad controller. This sets up the browser on your phone to be the recording mechanism for NeoLoad.
  • Sometimes, it is not possible to set up proxy mode to work on your mobile device. When this occurs, using tunnel mode is the best alternative.

“Identify as” Mobile Recording

Another useful tool added to NeoLoad eliminates the need to connect directly to a mobile device at all. When starting a recording, check the “Identify as” checkbox. On the right, a three-dot button […] becomes available.

A dialogue opens for selecting the desired device to emulate. Choose an iPhone or Android model, and you can then use your browser instead of a mobile device to record your application just as if you were actually using a smartphone.

Mobile Emulation During Testing

Aside from mobile recording to create your tests, sometimes you might want to emulate several iPhone/Android users accessing your application and performing the functions you’ve recorded. To do so is quite simple. When you set up your test, you must first create a population. It’s at the population level that you can select various options for these users to emulate (click on the Population tab in the design section of NeoLoad).

Click the plus sign at the bottom left under the population window, and it will ask you to name your new population. Once this is done, on the right side window, you’ll see options such as User Path, Percent, Browser, WAN Emulation, and Handle Cache.

Click on the three-dot […] button to the right of “Browser,” and scroll to the bottom of the list. You’ll see multiple types of iPhone and Android options. Setting the smartphone emulation is as simple as that. To create users on various versions of phones, generate a different population for each and include them in your test.

It’s also important to note that there is a WAN Emulation feature that lets you select and emulate Mobile/Wireless carriers and network speeds. Click on the three-dot button […] next to WAN emulation, and it opens the “WAN Emulation Profiles Picker.”

It’s the same as with the browser – you are supplied with a search to make finding a WAN type easier. The WAN types include Broadband, LAN, Mobile, and Wireless service carrier types. You are also provided with latency and packet-loss sliders to increase or decrease upload/download speeds even further. If you’ve selected a Mobile or Wireless service, you can drill down to the signal strength: Good (3 bars), Average (2 bars), or Poor (1 bar).

The power of bandwidth emulation shows when comparing user experience, for example, a set of 3G users against a set of 4G users, graphed against your test environment.

Comparing the population of 3G users against those with 4G provides the distinct differences at each metric point, which can help isolate specific desired conditions.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

If you have suggestions or would like to participate in this conversation, please let us know within the Neotys Community.

Neotys Delivers Latest Load Testing Platform Enhancements with NeoLoad 6.4 Release


For their Agile and DevOps-focused application development teams, leading companies want to achieve both speed and quality without making tradeoffs. That’s where NeoLoad 6.4 comes in, helping to meet increasing user demands, deliver apps to the market faster and more frequently, and release flawlessly with confidence, every time.

What’s New in This Release?

NeoLoad 6.4 continues Neotys’ focus on enhancing the value of NeoLoad to Agile and DevOps teams to speed testing. Enhancements in this release center on the areas of NeoLoad platform openness, Component/API “Shift Left” testing, and core load testing features.

NeoLoad Web – Centralized SLA Summary

Through NeoLoad Web, performance and load testers and other performance stakeholders can quickly understand and identify why a test has failed, with a new, centralized summary page that shows SLA status within NeoLoad. The SLA summary lets you see exactly which SLA failed, and precisely which defined SLA parameters it violated.

Easier Validation for JSONPath and XPath Responses

Testers doing Component/API testing with NeoLoad 6.4 can now specify either JSONPath or XPath when creating a variable extractor or defining a response validation, independent of the Content-Type of the response. JSONPath or XPath can now be used for all requests, including manually-defined ones. This new feature speeds up the workflow for testers.
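
For example, against a hypothetical response body, the two extraction styles might look like this (the payloads and paths are illustrative):

JSON response:  {"order": {"id": 42, "status": "shipped"}}
JSONPath:       $.order.status        -> "shipped"

XML response:   <order><id>42</id><status>shipped</status></order>
XPath:          /order/status/text()  -> "shipped"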


Automated Cloud Test Infrastructure

DevOps-focused teams doing automated performance testing can now start and use a cloud session via the NeoLoad Runtime API. The API lets users specify the characteristics of a cloud session, including the desired mix of cloud and on-premises load generators, drawing on VUH or Cloud credits. This cloud session can then be started and used automatically by an automated test, resulting in a higher level of test automation and saving time on each test iteration throughout the SDLC.
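
As an illustration only (the endpoint and field names below are hypothetical, not the documented NeoLoad Runtime API; consult the API reference for the real calls), an automated pipeline step might reserve a mixed session like this:

// Hypothetical request shape for reserving a cloud session from a CI script.
fetch('https://neoload.example.com/runtime/v1/cloud-sessions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    cloudLoadGenerators: 8,      // generators provisioned in the cloud
    onPremiseLoadGenerators: 2,  // generators from local infrastructure
    billing: 'VUH'               // VUH or Cloud credits, per the license
  })
}).then(function (response) { return response.json(); });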

Additional APM Integration Available

NeoLoad 6.4 includes a new APM integration with New Relic.

NeoLoad’s integration portfolio with leading APM platforms – Dynatrace, CA, and AppDynamics – has just expanded to include New Relic. A new NeoLoad advanced action enables the analysis of load testing data using New Relic APM and New Relic Insights. This advanced action provides both an inbound (extract New Relic APM data to NeoLoad to be viewable within NeoLoad’s dashboards) and an outbound (extract NeoLoad load testing data and send it to New Relic’s Insights tool) integration.

The advanced action is available on Neotys Labs, along with a tutorial document that shows how to use it. It is also available from New Relic Plugin Central (in-product) and the New Relic website. Documentation and screenshots are available from https://github.com/Neotys-Labs/NewRelic/.

Brotli Server-Side Compression Method Support

Web-based applications running on servers negotiate a compression algorithm with browsers to compress response content. Brotli can be used to compress HTTPS responses sent to a browser, in place of gzip or deflate. NeoLoad now supports this server-side compression technology, used to speed up content delivery over the Web, alongside its existing gzip support.
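
The negotiation itself happens in standard HTTP headers: a browser that supports Brotli advertises it alongside gzip and deflate, and the server picks one, for example:

Request:   Accept-Encoding: gzip, deflate, br
Response:  Content-Encoding: br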

Learn More

Discover all of NeoLoad 6.4’s new and enhanced features on our What’s New and Technical Features pages, or download the latest version and start testing today.

Webinar Review: Don’t Let Your Cloud Migration Cloud Performance


During Q4 2017, Gartner forecasted that worldwide public cloud services revenue would total USD 260 billion in 2017 and estimated that it would reach USD 411 billion by 2020. Q3 2017 data from Synergy Research Group indicated that the cloud market continues to grow by over 40% per year as cloud migration proceeds, and that AWS’s domination of the Infrastructure-as-a-Service market continues unabated.

Businesses continue to rely on the cloud to keep pace with the speed of innovation. Why wouldn’t they? It has revolutionized load testing: up-front test environment infrastructure costs are replaced with a pay-as-you-go model, time-consuming test environment setup is eliminated, its on-demand nature promotes collaboration, and teams gain the ability to conduct efficient and realistic large-scale tests.

The key to load testing in the cloud is understanding how to apply the right tools and practices such that you achieve a balance between cloud-based and complementary on-premises load testing practices.

See the Webinar about Cloud Migration

We recently delivered a webinar in partnership with TechWell, aimed at providing some considerations for organizations migrating apps to cloud-based services like AWS and Azure: “Don’t Let Your Cloud Migration Cloud Your Performance.”

Despite the prominence of public cloud services, when prompted with “What kinds of clouds do your apps use?” a surprising majority (56%) of our attendees cited the use of a public and private cloud mix, as compared to all-public or all-private. Where does your organization stand on this?

Learn More about Cloud Migration & Testing

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

Realistic Load Testing: Configuring Population Parameters & Launch Configurations


In NeoLoad, a population is a group of Virtual Users that are configured (by type) to test an application. Each type can be given a different business or network behavior to reflect realistic load testing. For example, you can simulate 50% of your users carrying out a purchase in a desktop web browser while the other 50% browse via a phone browser on a 3G connection. Configuring groups of populations in this manner enables you to test your applications realistically under simulated conditions.

Why are changes to a testing population so significant? For starters, bandwidth and hardware emulation, along with the choice of which user path to test, are configurable options that provide quantifiable metrics.

Let’s talk about two areas of the NeoLoad population component:

  • Design -> Populations Tab
  • Runtime -> Advanced Population Parameters

Important Population Parameters

Once you create a population, the options behind the three-dot […] button next to “Browser” are explained below.

Creation of a NeoLoad population is simple, here’s how:

  1. Your Design -> Populations tab (lower left) allows for addition or removal of populations via a +/- sign.
  2. Click the + to add a population, and call this new population “Pop1.”
  3. Next, we will add a 2nd population (referenced later), and call it “Pop2.”
  4. Now, select “Pop1” on the left (you will see the following list of features that can be changed or modified: User Path, Percent, Browser, WAN Emulation, and Handle Cache options).
    • As mentioned above, we want to focus on the three-dot […] button next to “Browser” because there are some critical features inside that dialog which should be made known to all users of NeoLoad. We will cover the other features afterward.

Browser Profiles Picker

At the top, you can search for a specific browser (and phone type) to make selection easier.

Underneath the browser selection, you can see important settings that let you disable or enable HTTP2. Although this is “on” by default, turning it “off” has been found to help solve certain problems. The same dialog controls cookie handling (another setting which defaults to “on”). You can also set the number of parallel connections for HTTP1 (the default is 6).

TIP: Should you encounter playback issues during realistic load testing, consider turning the HTTP2 option off altogether.

To address the other parameters, on the Populations tab, from top left to bottom right:

  • User Path: Allows you to select which path is associated with this population.
  • Percent: Because a population can include several User Paths, it is necessary to specify the portion each represents in the population. When running a scenario with a given number of Virtual Users, this percentage defines the number of Virtual Users of each type to be used (see the worked example after this list).
  • WAN Emulation: The performance of an application is dependent upon the network. The WAN Emulation function enables the realistic recreation of existing IT or mobile network conditions with simulated bandwidth values, latency levels, and packet loss rates.
    • Note: You can also modify download and upload speeds, latency, and the packet-drop rate for the population.
  • Handle Cache:
    • As recorded: reuses the cache configuration captured at scenario recording time
    • New user: Virtual Users start with an empty cache
    • Returning user: The Virtual User cache is up-to-date. The application does not return any response containing information that is already cached.
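
A quick worked example of the Percent setting (the numbers are illustrative): a population running 1,000 Virtual Users with User Path A at 60% and User Path B at 40% launches

1,000 x 60% = 600 Virtual Users on User Path A
1,000 x 40% = 400 Virtual Users on User Path B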

Now, let’s switch gears and take a closer look at which population choices can be modified during the launch of a test. If we click on the Runtime tab, we’ll find the Populations box again (this should contain both “Pop1” and “Pop2”). Put a check next to both populations and, with the test scenario selected, click on the “Advanced” button at the bottom.

This brings up Advanced Population Parameters.

Here we have essential launch choices specific to the “Start Policy” for the selected population. Choose from:

  • Immediately launching the population at the start of the test
  • Delaying the launch of this population by X number of seconds, OR
  • Sequentially ordering the launch, so the population starts after another population has completed (in this case, after “Pop2”)

Next, you can select how the Virtual Users start when the population is launched:

  • Simultaneously: All Virtual Users begin at the same time, or
  • Sequentially: All Virtual Users start over a period of X seconds

Finally, you can select the end policy for the population.

Stop Policy

  • Immediate: Stops all Virtual Users at the end of a test no matter where they are -> going to their end containers
  • Delayed: Gives X seconds to finish their current iteration -> going to end container
  • Indeterminate: Allows all Virtual Users to complete their current iteration before stopping the test and -> going to their end containers

Tip: Indeterminate may extend a test considerably if an iteration has a User Path issue, so be aware that selecting this may cause tests to lag depending on the complexity of your User Path workflows.

Learn More about Realistic Load Testing

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

If you have suggestions or would like to participate in this conversation, please let us know within the Neotys Community.

Performance Test Integration: The Benefits of an Agnostic Approach

$
0
0

I’m not usually a fan of vendor lock-in when it comes to performance test integration. I like being able to use the best product available to meet the need at hand. Were it otherwise, I’d still be driving around in a Ford, the car I drove right out of college; today I own a Kia. I’ve also driven a Saab, Toyota, Acura, Cadillac, Dodge, Pontiac, Mercury and Chevy pickup. As times changed, so did my needs. For me, it’s the car, not the manufacturer. The same is true for computer technology. I am not a Microsoft guy, any more than I am an Adobe guy, a Computer Associates guy or an Apple guy. Whether it’s software or cars, I use products that allow me to do what I need to do, when I need to do it, in the best way possible. I need versatility.

The Value Proposition: Versatility Counts

My need for versatility extends to the tools that I use for testing. It’s rare to come across a vendor-centric, one-stop-shopping solution for testing, particularly performance test integration, that meets every need defined in a particular testing scenario. Yes, I’ve had episodes where a single testing package could do it all. But those episodes were short-lived, typically brought to an end when a new testing requirement came up and the response from the vendor’s customer support line was the oft-heard, “sorry, the product can’t do that, yet.”

In such cases I’ve been left high and dry, at the vendor’s beck and call, waiting for the next version to release. The problem was that I couldn’t wait a few months. I needed to solve my problem immediately.

I’ve learned my lesson. These days I avoid vendor lock-in and take an agnostic approach to performance test integration. My testing needs require that my tools play well together. Thus, the most important thing I look at when deciding the particular technology to use is how well it lends itself to integration. Broad support for integration is key to versatility. Versatility gives my testing solutions a long shelf-life. Hence, I end up doing a lot less work and saving my company a lot more money!

Building an Ecosystem that Supports Versatility and Integration

It’s been my experience that the key to taking an agnostic approach to any software integration is understanding the ecosystem in which development takes place. Typically, frameworks that support plugins or publish APIs lend themselves well to an agnostic approach.

Plugins Enhance Power and Extend Versatility

Software designers understand the importance of promoting third-party development to support feature integration. No one vendor has the staff and wherewithal to imagine every use a given product might need to support. This is why plugin architectures are so important. Plugins allow developers to extend a product to meet new needs by adding features far beyond the imagination of the original developer. Support for plugins is why products such as Jenkins, Eclipse, and WordPress have become so popular. Providing plugin capability has created a whole ecosystem of community developers who offer unique solutions well beyond what the product manufacturer could have done on its own. Plugin support is a win-win for all involved. A product with a large developer community writing a broad set of plugins provides the versatility, adaptability, and long-term viability a framework needs to stay relevant in the world of modern software development. Simply put, a plugin ecosystem without third-party developers, ain’t.

APIs Provide a Standard for Integration

Plugins aren’t the only way to extend the power of a general solution framework. There are also APIs. Most modern testing solutions provide an API that allows testers to manipulate test execution by way of code. Some APIs can be accessed directly on a local machine using a client library. Selenium Driver is one of the better-known local libraries. Software Development Engineers in Test (SDETs) use Selenium Driver libraries to execute Selenium tests automatically on a local machine. The driver can also be used to perform tests that run on remote computers. One script can be used locally or remotely; the API is the same no matter where it is run. This consistency makes integration easy.

Many web-based testing products, whether running on localhost or remotely, provide RESTful APIs that can be used to integrate services on an as-needed basis. The beauty of a REST API is that all interaction is standardized under the HTTP protocol. No special libraries are needed. A REST-based API provides a one-size-fits-all approach to testing and development integration. For example, it’s entirely possible for a test designer to integrate a system’s monitoring service with a separate performance testing service. All that’s required is that both services are accessible via a REST API. (See Figure 1.)

Figure 1: The REST standard allows a test runner to observe application behavior easily using a variety of data gathering resources
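
As a generic sketch (the URLs are placeholders for whatever REST-enabled monitoring and load testing services you combine), a test runner could correlate the two with nothing more than standard HTTP calls:

// Both calls use plain HTTP -- no vendor client library required.
Promise.all([
  fetch('https://monitor.example.com/api/v1/metrics/cpu'),
  fetch('https://loadtest.example.com/api/v1/tests/current/stats')
]).then(function (responses) {
  return Promise.all(responses.map(function (r) { return r.json(); }));
}).then(function (results) {
  // results[0]: system metrics; results[1]: load test statistics.
  console.log('CPU under load:', results[0], results[1]);
});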

Whether integration is achieved using a plugin architecture or working with an API, the most important thing is that there is no vendor lock-in play. Yes, there are standards that must be supported, but these standards are not exclusionary. IBM can write a Jenkins plugin. Git can write a Jenkins plugin. Neotys can publish an API. I can create a WordPress plugin. No person or company is prohibiting development. All that’s required for entry into the ecosystem is adopting a standard open to all. Public access is essential for an agnostic approach to be viable.

NeoLoad allows testers to take an agnostic approach to performance testing.

NeoLoad was designed from the start to allow testers to take an agnostic approach to performance test integration. NeoLoad integrates easily with commonly used tools such as Selenium and Eclipse. (Here’s a link to a step-by-step video that shows you how to execute Selenium scripts under NeoLoad using Eclipse.)

Also, NeoLoad Web provides a powerful REST API that allows testers to observe ongoing test and system behavior across a variety of systems, as well as compare real-time and historical data. Testers can use NeoLoad’s APIs to get low-level system performance information or integrate with other RESTful system monitors in an agnostic fashion.

Putting it All Together

Modern software development is fast and furious. Feature releases that used to take months now take days, if not hours. Automation and integration allow more software to get to users at faster rates. Testing is no exception. Modern test designers cannot afford to risk slowing down their efforts by dedicating themselves to one vendor. Instead, they need to be able to pivot on a dime to meet release demands. Using a new plugin or adopting a new API immediately instead of waiting for a single vendor to release a feature to achieve a mission-critical need provides the versatility required to be competitive in today’s economy. Taking an agnostic approach to performance test integration is not an option that’s nice to have. It’s essential for ensuring that your company is making the quality software your customers want.

Learn More about Performance Test Integration

Discover more load testing and performance test integration content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Bob Reselman 