Performance testing is the practice of evaluating how a system performs under a particular workload.

Performance Testing focuses on:

  • Speed – Determines whether the application responds quickly.
  • Scalability – Determines the maximum user load the software application can handle.
  • Stability – Determines whether the application remains stable under varying loads.

Why is performance testing important?

The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks.

The performance tests you run will help ensure your software meets the expected levels of service and provides a positive user experience. They will highlight improvements you should make to your applications with respect to speed, stability, and scalability before they go into production. Applications released to the public without such testing may suffer from problems that damage brand reputation, in some cases irrevocably.

While resolving production performance problems can be extremely expensive, a continuous performance testing and optimization strategy is key to the success of an effective overarching digital strategy.

Performance Testing is done to provide stakeholders with information about their application regarding speed, stability, and scalability. Without Performance Testing, the software is likely to suffer from issues such as running slowly when several users use it simultaneously, inconsistencies across different operating systems, and poor usability.

More importantly, Performance Testing uncovers what needs to be improved before the product goes to market. Applications sent to market with poor performance metrics due to nonexistent or poor performance testing are likely to gain a bad reputation and fail to meet expected sales goals.

Also, mission-critical applications like space launch programs or life-saving medical equipment should be performance tested to ensure that they run for a long period without deviations.

According to Dun & Bradstreet, 59% of Fortune 500 companies experience an estimated 1.6 hours of downtime every week. Considering that the average Fortune 500 company has at least 10,000 employees paid an average of $56 per hour, the labor portion of downtime costs for such an organization would be $896,000 weekly, translating into more than $46 million per year.

A 5-minute outage of Google.com on 19 August 2013 was estimated to have cost the search giant as much as $545,000.

It is estimated that companies lost sales worth $1,100 per second during a recent Amazon Web Services outage.

In short, performance testing is done to ensure that an application will meet the service levels expected in production and deliver a positive user experience. Application performance is a key determinant of adoption, success, and productivity.

Hence, performance testing is important.

When to do a performance test?

Whether it’s a web or mobile application, the lifecycle of an application includes two phases: development and deployment, and performance testing has a place in both. In the deployment phase, operational teams expose the application to end users on the production architecture.

Development – Development performance tests focus on components (web services, microservices, APIs). The earlier the components of an application are tested, the sooner an anomaly can be detected and, usually, the lower the cost of rectification.

Deployment – As the application starts to take shape, performance tests should become more and more extensive. In some cases, they may be carried out during deployment (for example, when it’s difficult or expensive to replicate a production environment in the development lab).

What are the different types of performance tests?

Stress Testing

This test pushes an application beyond normal load conditions to determine which components fail first. Stress testing attempts to find the breaking point of the application and is used to evaluate the robustness of the application’s data processing capabilities and response to high volumes of traffic.
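
A minimal stress-test sketch in Python follows, using the third-party requests package and the standard concurrent.futures module. It ramps up concurrency until the error rate crosses a threshold, approximating the breaking point; the URL, load steps, and threshold are illustrative assumptions, not values from this article.

```python
# Stress-test sketch: ramp up concurrency until the error rate crosses a
# threshold, approximating the application's breaking point.
# TARGET_URL, the load steps, and MAX_ERROR_RATE are illustrative placeholders.
import concurrent.futures
import requests

TARGET_URL = "https://example.com/api/health"   # hypothetical endpoint
MAX_ERROR_RATE = 0.05                           # stop once >5% of requests fail

def hit(url: str) -> bool:
    """Return True when the request completes without a server error or exception."""
    try:
        return requests.get(url, timeout=10).status_code < 500
    except requests.RequestException:
        return False

for users in (10, 50, 100, 200, 400, 800):      # increasing load steps
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, [TARGET_URL] * users * 5))
    error_rate = 1 - sum(results) / len(results)
    print(f"{users} concurrent users -> error rate {error_rate:.1%}")
    if error_rate > MAX_ERROR_RATE:
        print("Breaking point reached at roughly", users, "users")
        break
```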

Spike Testing

This testing evaluates the ability of the application to handle sudden increases in volume. It is done by suddenly increasing the load generated by a very large number of users. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load. This testing is critical for applications that experience large increases in the number of users, for example, utility customers reporting power outages during storms. Spike testing can be considered a component of stress testing.

Load Testing

The purpose of load testing is to evaluate the application’s performance under increasingly high numbers of users. Load, in the form of increasing numbers of users, is applied to the application under test, and the results are measured to validate that the requirements are met. This load can be the expected number of concurrent users performing a specific number of transactions within a set duration. The test reports the response times of all the important business-critical transactions. If the database, application server, and other components are also monitored, then this simple test can itself point towards bottlenecks in the application software.
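
As a rough illustration, the sketch below applies a fixed number of concurrent users to a single transaction and reports response-time statistics. It assumes the requests package is installed; the URL and user counts are placeholders, and a real load test would model full user scenarios rather than a single GET.

```python
# Load-test sketch: apply a fixed number of concurrent users and report
# response times for one business-critical transaction.
import concurrent.futures
import statistics
import time
import requests

URL = "https://example.com/checkout"   # hypothetical business transaction
CONCURRENT_USERS = 100
REQUESTS_PER_USER = 10

def one_request(_):
    """Time a single request; failures are ignored in this simplified sketch."""
    start = time.perf_counter()
    try:
        requests.get(URL, timeout=30)
    except requests.RequestException:
        pass
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = sorted(pool.map(one_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

print(f"avg: {statistics.mean(times):.3f}s")
print(f"p90: {times[int(len(times) * 0.9)]:.3f}s")
print(f"max: {times[-1]:.3f}s")
```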

Endurance Testing

Endurance testing evaluates the performance of the system under load over time. It is executed by applying varying loads to the application under test for an extended period of time to validate that the performance requirements related to production loads and durations of those loads are met. Endurance testing can be considered a component of load testing and is also known as soak testing.

Volume Testing

Also known as flood testing, this testing is used to evaluate the application’s ability to handle large volumes of data. The impact on response time and the behavior of the application are analyzed. This testing can be used to identify bottlenecks and to determine the capacity of the system. This type of performance testing is important for applications that deal with big data.

Scalability Testing

This testing is used to determine your application’s ability to handle increasing amounts of load and processing. It involves measuring attributes such as response time, throughput, hits and requests per second, transaction processing speed, CPU usage, network usage, and more. Results of this testing can feed into the planning and design phases of development, which reduces costs and mitigates the risk of performance issues.
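
A minimal way to visualize scalability is to measure throughput (requests per second) at several load levels and check how it grows. The sketch below assumes the requests package is installed; the URL and load levels are illustrative placeholders.

```python
# Scalability sketch: measure throughput at several concurrency levels to see
# how the application scales as load increases.
import concurrent.futures
import time
import requests

URL = "https://example.com/api/items"   # hypothetical endpoint
REQUESTS_PER_LEVEL = 200

def fetch(_):
    try:
        requests.get(URL, timeout=30)
    except requests.RequestException:
        pass  # errors are ignored here; a real test would record them

for workers in (5, 10, 20, 40, 80):
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fetch, range(REQUESTS_PER_LEVEL)))
    elapsed = time.perf_counter() - start
    print(f"{workers:>3} workers -> {REQUESTS_PER_LEVEL / elapsed:6.1f} requests/s")
```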

Common Performance Problems

Most performance problems revolve around speed, response time, load time, and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user’s attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:

  • Long load time – Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum. While some applications are impossible to make load in under a minute, load time should be kept under a few seconds if possible.
  • Poor response time – Response time is the time it takes from when a user inputs data into the application until the application outputs a response to that input. Generally, this should be very quick. Again, if a user has to wait too long, they lose interest.
  • Poor scalability – A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users. Load testing should be done to be certain the application can handle the anticipated number of users.
  • Bottlenecking – Bottlenecks are obstructions in a system that degrade overall system performance. Bottlenecking occurs when either coding errors or hardware issues cause a decrease in throughput under certain loads, and it is often caused by one faulty section of code. The key to fixing a bottleneck is to find the section of code causing the slowdown and fix it there, either by improving poorly running processes or by adding additional hardware. Some common performance bottlenecks are listed below, followed by a minimal monitoring sketch:
    • CPU utilization
    • Memory utilization
    • Network utilization
    • Operating System limitations
    • Disk usage
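
A simple way to watch these resources while a load test runs is shown below. The sketch assumes the third-party psutil package is installed on the server under test; sampling duration and output format are arbitrary choices.

```python
# Resource-monitoring sketch for the bottleneck candidates above, using psutil.
# Run it on the server under test while a load test is in progress.
import psutil

for _ in range(10):                                   # sample for roughly 10 seconds
    cpu = psutil.cpu_percent(interval=1)              # CPU utilization (%)
    mem = psutil.virtual_memory().percent             # memory utilization (%)
    disk = psutil.disk_usage("/").percent             # disk usage (%)
    net = psutil.net_io_counters()                    # cumulative network counters
    print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  disk={disk:5.1f}%  "
          f"net_sent={net.bytes_sent}  net_recv={net.bytes_recv}")
```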

Performance Testing Process

The methodology adopted for performance testing can vary widely but the objective for performance tests remains the same. It can help demonstrate that your software system meets certain pre-defined performance criteria. Or it can help compare the performance of two software systems. It can also help identify parts of your software system which degrade its performance.

Below is a generic process for performing performance testing:

  1. Identify your testing environment – Know your physical test environment, your production environment, and what testing tools are available. Understand the details of the hardware, software, and network configurations used during testing before you begin the testing process. This helps testers create more efficient tests and identify possible challenges they may encounter during the performance testing procedures.
  2. Identify the performance acceptance criteria – This includes goals and constraints for throughput, response times, and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals, because project specifications often do not include a wide enough variety of performance benchmarks, and sometimes include none at all. When possible, finding a similar application to compare against is a good way to set performance goals.
  3. Plan and design performance tests – Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data, and outline what metrics will be gathered.
  4. Configure the test environment – Prepare the testing environment before execution, and arrange the tools and other resources.
  5. Implement the test design – Create the performance tests according to your test design.
  6. Run the tests – Execute and monitor the tests.
  7. Analyze, tune and retest – Consolidate, analyze, and share test results, then fine-tune and test again to see whether performance has improved or degraded (a comparison sketch follows this list). Since improvements generally grow smaller with each retest, stop when the remaining bottleneck is the CPU itself; at that point the remaining option may be to increase CPU power.
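
For step 7, a minimal sketch of comparing two runs is shown below. The two lists of response-time samples are made-up stand-ins for real measurements taken before and after tuning.

```python
# Compare response-time percentiles from a baseline run and a tuned run to
# check whether the change actually improved performance.
import statistics

def percentile(samples, pct):
    """Return the pct-th percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

baseline_run = [0.84, 0.91, 1.10, 1.32, 0.88, 2.40, 1.05]   # before tuning (illustrative)
tuned_run    = [0.61, 0.58, 0.72, 0.95, 0.66, 1.20, 0.70]   # after tuning (illustrative)

for pct in (50, 90, 95):
    before, after = percentile(baseline_run, pct), percentile(tuned_run, pct)
    print(f"p{pct}: {before:.2f}s -> {after:.2f}s ({(after - before) / before * 100:+.1f}%)")
print(f"mean: {statistics.mean(baseline_run):.2f}s -> {statistics.mean(tuned_run):.2f}s")
```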

Performance Testing Metrics: Parameters Monitored

  • Processor Usage – the amount of time the processor spends executing non-idle threads.
  • Memory use – the amount of physical memory available to processes on a computer.
  • Disk time – the amount of time the disk is busy executing a read or write request.
  • Bandwidth – the bits per second used by a network interface.
  • Private bytes – the number of bytes a process has allocated that cannot be shared with other processes. These are used to measure memory leaks and usage.
  • Committed memory – the amount of virtual memory used.
  • Memory pages/second – the number of pages written to or read from disk in order to resolve hard page faults. A hard page fault occurs when code or data outside the current working set has to be retrieved from disk.
  • Page faults/second – the overall rate at which page faults are processed by the processor. This again occurs when a process requires code from outside its working set.
  • CPU interrupts per second – the average number of hardware interrupts a processor receives and processes each second.
  • Disk queue length – the average number of read and write requests queued for the selected disk during a sample interval.
  • Network output queue length – the length of the output packet queue, in packets. A queue length of more than two indicates a delay and a bottleneck that needs to be addressed.
  • Network bytes total per second – the rate at which bytes are sent and received on the interface, including framing characters.
  • Response time – the time from when a user enters a request until the first character of the response is received.
  • Throughput – the rate at which a computer or network receives requests, typically per second (see the results-processing sketch after this list).
  • Amount of connection pooling – the number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.
  • Maximum active sessions – the maximum number of sessions that can be active at once.
  • Hit ratios – the proportion of SQL statements that are handled by cached data instead of expensive I/O operations. This is a good place to start when solving bottleneck issues.
  • Hits per second – the number of hits on a web server during each second of a load test.
  • Rollback segment – the amount of data that can be rolled back at any point in time.
  • Database locks – locking of tables and databases needs to be monitored and carefully tuned.
  • Top waits – monitored to determine which wait times can be reduced when tuning how quickly data is retrieved from memory.
  • Thread counts – an application’s health can be measured by the number of threads that are running and currently active.
  • Garbage collection – the process of returning unused memory to the system. Garbage collection needs to be monitored for efficiency.
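
Several of these metrics can be derived directly from a load test's raw results. The sketch below assumes a results file in JMeter's default CSV format (for example, results.jtl produced by running JMeter in non-GUI mode with jmeter -n -t plan.jmx -l results.jtl) and that the file contains the default timeStamp and elapsed columns.

```python
# Derive response time, throughput, and sample counts from a JMeter-style CSV
# results file. Column names follow JMeter's default CSV output.
import csv
import statistics

timestamps_ms, elapsed_ms = [], []
with open("results.jtl", newline="") as handle:
    for row in csv.DictReader(handle):
        timestamps_ms.append(int(row["timeStamp"]))   # epoch milliseconds per sample
        elapsed_ms.append(int(row["elapsed"]))        # response time in milliseconds

duration_s = (max(timestamps_ms) - min(timestamps_ms)) / 1000 or 1
print(f"samples:      {len(elapsed_ms)}")
print(f"avg response: {statistics.mean(elapsed_ms):.0f} ms")
print(f"p90 response: {sorted(elapsed_ms)[int(len(elapsed_ms) * 0.9)]} ms")
print(f"throughput:   {len(elapsed_ms) / duration_s:.1f} requests/s")
```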

What does performance testing measure?

Performance testing can be used to analyze various success factors such as response times and potential errors. With these performance results in hand, you can confidently identify bottlenecks, bugs, and mistakes – and decide how to optimize your application to eliminate the problem(s). The most common issues highlighted by performance tests are related to speed, response times, load times, and scalability.

Excessive Load Times

Excessive load time is the time required to start an application. Any delay should be as short as possible, a few seconds at most, to offer the best possible user experience.

Poor Response Times

Response time is the time that elapses between a user entering information into an application and receiving the response to that action. Long response times significantly reduce users’ interest in the application.

Limited Scalability

Limited scalability represents a problem with the ability of an application to accommodate different numbers of users. For instance, the application performs well with just a few concurrent users but deteriorates as user numbers increase.

Bottlenecks

Bottlenecks are obstructions in the system that decrease the overall performance of an application. They are usually caused by hardware problems or poorly written code.

Example Performance Test Cases

  • Verify the response time is not more than 4 seconds when 1,000 users access the website simultaneously.
  • Verify the response time of the application under load is within an acceptable range when network connectivity is slow.
  • Check the maximum number of users that the application can handle before it crashes.
  • Check database execution time when 500 records are read/written simultaneously.
  • Check CPU and memory usage of the application and the database server under peak load conditions.
  • Verify the response time of the application under low, normal, moderate, and heavy load conditions.

During the actual performance test execution, vague terms like acceptable range, heavy load, etc. are replaced by concrete numbers. Performance engineers set these numbers based on business requirements and the technical landscape of the application.
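
One way to pin such vague terms to concrete numbers is to define named load profiles as data. Every value in the sketch below is made up for illustration; real values come from business requirements and the application's technical landscape.

```python
# Illustrative load profiles mapping vague terms to concrete targets.
LOAD_PROFILES = {
    "low":      {"concurrent_users": 50,   "max_p95_response_s": 2.0},
    "normal":   {"concurrent_users": 500,  "max_p95_response_s": 3.0},
    "moderate": {"concurrent_users": 1000, "max_p95_response_s": 4.0},
    "heavy":    {"concurrent_users": 2500, "max_p95_response_s": 6.0},
}

def within_sla(profile: str, measured_p95_s: float) -> bool:
    """Return True when the measured 95th-percentile time meets the profile's limit."""
    return measured_p95_s <= LOAD_PROFILES[profile]["max_p95_response_s"]

print(within_sla("moderate", measured_p95_s=3.7))   # True: within the 4-second limit
```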

Performance Testing Tools

There are a wide variety of performance testing tools available in the market. The tool you choose will depend on many factors such as the types of protocols supported, license cost, hardware requirements, platform support, etc. Below is a list of popularly used testing tools.

  • LoadNinja – is revolutionizing the way we load test. This cloud-based load testing tool empowers teams to record and instantly play back comprehensive load tests without complex dynamic correlation, and to run these load tests in real browsers at scale. Teams are able to increase test coverage and cut load testing time by over 60%.
  • HP LoadRunner – is one of the most popular performance testing tools on the market today. This tool is capable of simulating hundreds of thousands of users, putting applications under real-life loads to determine their behavior under expected loads. LoadRunner features a virtual user generator which simulates the actions of live human users.
  • JMeter – is one of the leading tools used for load testing of web and application servers.

FAQ

Which Applications should we Performance Test?

Performance Testing is typically done for client-server-based systems. Applications that do not follow a client-server architecture generally do not require Performance Testing.

For example, Microsoft Calculator is neither client-server based nor does it support multiple concurrent users; hence it is not a candidate for Performance Testing.

What is the difference between Performance Testing & Performance Engineering?

It is important to understand the difference between Performance Testing and Performance Engineering:

Performance Testing is a discipline concerned with testing and reporting the current performance of a software application under various parameters.

Performance Engineering is the process by which software is tested and tuned with the intent of realizing the required performance. This process aims to optimize the most important application performance trait, i.e., user experience.

Historically, testing and tuning have been distinctly separate and often competing realms. In the last few years, however, several pockets of testers and developers have collaborated independently to create tuning teams. Because these teams have met with significant success, the concept of coupling performance testing with performance tuning has caught on, and now we call it performance engineering.

What is a load testing tool?

Load testing is a way of ensuring that software works well under real-world scenarios.

A load testing tool is used to recreate the behavior of real users on a variety of software applications using virtual users (VUs). It can simulate anywhere between one and several million VUs, depending on the nature and requirements of the load test. This ability to reproduce so many users makes it an indispensable tool – as it is not possible for humans to conduct this kind of testing on such a large scale.

The tool can be used on-premise or in the Cloud. For extreme tests, where a large quantity of VUs is needed, many servers are required. In this case, testing in the Cloud is the better option as it is more easily scalable than an on-premise solution. A load testing tool is used to:

  • Create load testing scripts: simulating the activity of each VU during testing
  • Configure test parameters:
    • What duration of testing is required, and for how many VUs?
    • How many different types of users will be included in the test?
    • How many desktop users versus mobile users?
    • Where will the load come from: on-premise or cloud infrastructure?
  • Perform the Test:
    • Execute test with the script and configured parameters in place
    • Define when it should be conducted
    • Consider running it from a Continuous Integration server (e.g., Jenkins); a minimal pass/fail sketch follows this list
  • Analyze the Results:
    • While the test is running and the application is under load, the performance engineer must analyze the software’s behavior
    • Such monitoring during operation can be done with the load testing tool or with other specific monitoring means (e.g., an APM, or Application Performance Monitoring, tool)
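
For the Continuous Integration step mentioned above, a common pattern is a small script that fails the build when the results break the SLA. The sketch below is illustrative: the thresholds are made up, and in a real pipeline the measured values would be parsed from the load test's output rather than hard-coded.

```python
# Automated pass/fail gate for a CI server such as Jenkins: exit with a
# non-zero status when measured results break the SLA, so the pipeline fails.
import sys

SLA = {"max_avg_response_s": 2.0, "max_error_rate": 0.01}          # illustrative limits
measured = {"avg_response_s": 1.4, "error_rate": 0.002}             # stand-in for parsed results

failures = []
if measured["avg_response_s"] > SLA["max_avg_response_s"]:
    failures.append("average response time above SLA")
if measured["error_rate"] > SLA["max_error_rate"]:
    failures.append("error rate above SLA")

if failures:
    print("PERFORMANCE GATE FAILED:", "; ".join(failures))
    sys.exit(1)                                                      # fails the CI step
print("Performance gate passed")
```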

Why use a load testing tool?

Using a load testing tool can identify and solve bottlenecks the system might experience in different scenarios. This helps prevent problems from occurring in a live production environment – which might negatively impact the business.

By conducting testing in realistic scenarios, the load testing tool helps protect against poor software performance, including slow response times. It can also be adapted to help manage and monitor performance levels in a live production environment.

Who uses load testing tools?

A load testing tool is used by a number of different professionals:

  • Performance engineers who work in performance test centers: These engineers need the most advanced tools to create the proper scripts and scenarios for effective testing.
  • Developers working in Agile/DevOps teams: Developers are becoming more involved in performance testing. When working within continuous monitoring processes, they start testing the first lines of code as soon as the first APIs are developed – even before a graphical user interface is available. These developers need a simple tool that doesn’t require a performance testing expert and one that can be used for API testing. As developers, they usually like to work in code, so the tool should ideally allow them to create tests using code (a minimal code-based example follows this list).
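
As one example of such a code-based test (not a tool discussed elsewhere in this article), the sketch below uses Locust, an open-source, Python-based load testing tool that must be installed separately. The host and endpoint are illustrative placeholders.

```python
# Minimal code-based API load test written for Locust.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)          # each simulated user pauses 1-3 seconds between tasks

    @task
    def list_items(self):
        self.client.get("/api/items")  # hypothetical API endpoint
```

A file like this is typically run headlessly, for example with locust -f locustfile.py --host https://example.com --headless -u 100 -r 10, which also makes it straightforward to call from a CI job.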

How to choose the perfect tool for your needs?

Below are the characteristics organizations should consider when making their decision, each with its functional description and the features and benefits it brings:

  • Test script design – Create realistic test scripts quickly, including complex ones. For performance engineers: create complex test scripts that simulate the diversity of real-world use cases. For developers: quickly generate API tests with code.
  • Technical support (protocols) – Support a variety of web protocols (HTTP, Java, etc.) as well as older protocols such as SAP GUI and Oracle Forms. Extended protocol support lets testers test all their current applications, simulate complex protocol behaviors to create realistic tests, and get early support for emerging protocols.
  • Mobile testing – Simulate mobile users. Mobile behaviors are different from those of desktop users and must be isolated and simulated realistically, considering the specifics of mobile network conditions and different devices.
  • On-premise/cloud load infrastructure – Generate on-premise loads, cloud loads, or a combination. An on-premise testing tool is easy to install but requires investing in your own load infrastructure; the cloud is more suitable for generating extreme load tests (those with thousands of VUs) and can create loads outside firewalls to simulate realistic conditions. Depending on the application being tested, a combination of on-premise and cloud VUs may be appropriate.
  • Load infrastructure management – Manage and reserve load infrastructure. In large organizations it can be challenging to manage load infrastructure, so the tool should enable teams to collaborate and share test resources (such as load generators and VU licenses).
  • Tool scalability – Scale to thousands or even millions of VUs. Not all load testing tools can accommodate such significant tests. The tool should generate millions of VUs, either on-premise or in the cloud, synchronize tens or even hundreds of controllers and load generators, and produce test reports and analyses that aggregate data from multiple controllers and load generators.
  • Load testing analysis – Analyze tested applications and identify bottlenecks. The tool must have its own monitoring capability, or at least be capable of importing monitoring data for analysis, and must provide actionable decision-making information to identify and help resolve bottlenecks.
  • Integration with the CI pipeline – Integrate with Continuous Integration servers to automate performance testing. Does the tool provide standard integration with the most popular CI/CD servers? Does it offer an API that enables integration with third parties? Does it permit code-based testing so performance testing can be part of a fully automated process? Does it provide an automated success/failure SLA result so that the test outcome can be evaluated automatically?
  • Integration with other testing tools – Integrate with other functional testing tools (e.g., Selenium). Through such integration, the tool lets you reuse existing functional test scripts for load testing, speeding up the design phase, and can provide “browser-based” performance indicators for the end-user experience.
  • Collaboration – Enable different teams to collaborate on the performance testing process and the analysis of results. The tool must let you share test resources (scripts, results, etc.) as well as the test infrastructure itself, including load generators and VU licenses, and must support teams working together.
  • Security – Ensure the security of the data being created (user logins, personal information, etc.). The tool must encrypt the data it handles and be completely secure, with no backdoors that could compromise the safety of the data.
  • Technical support (vendor) – Support customers in a variety of situations (different protocols, custom applications, etc.). The vendor should help testers customize protocols and VUs to create realistic tests, integrate performance testing into CI/CD pipelines to enable automation, and provide best-practice guidance.

Are there different types of load testing tools?

There are several types of load testing tools, with varying yet complementary approaches:

  • Protocol-based tools: generate protocol transactions at the application level; for example, requests on the HTTP protocol for web and mobile applications.
  • Browser-based tools: simulate the activity of real browsers, but for hundreds or even thousands of VUs.

Load testing tools also come in different categories:

  • Tools based on open-source technology (e.g., JMeter): suitable for simple use cases that do not require advanced testing capabilities.
  • Advanced tools capable of providing a solution for even the most sophisticated needs: NeoLoad is one of the leaders and the main alternative to long-standing solutions like LoadRunner and Performance Center, which have existed for decades.

What are the main tools for load testing?

  • NeoLoad – The leading load testing tool. Suitable for all testing requirements, from API testing to testing individual applications; designed for the enterprise market and testing centers of excellence, as well as Agile/DevOps teams.
  • LoadRunner / Performance Center – Widely used in large organizations. Meets the needs of large organizations and supports complex use cases, but is intended for use by experts because it is just as complex to use; both are expensive to implement and maintain.
  • JMeter – The open-source load testing tool. Well suited to the basic requirements of load testing, but does not support legacy protocols such as SAP GUI, and requires third-party tools for cloud and mobile testing.
  • BlazeMeter – Based on JMeter. Provides reporting and cloud computing capabilities in addition to the open-source capabilities of JMeter; limited support for JMeter’s capabilities under complex testing conditions.