Beth Schrag

End-to-End Performance Testing Services

We have all heard of a website or system that crashed because it could not handle an influx of traffic. Performance testing verifies that your system will keep operating as the load on it increases. It is typically done prior to a release, so the system can be fine-tuned and any issues resolved before it goes live.


Performance testing lets you examine several elements that determine success, including response times and error rates. Armed with this performance data, you can pinpoint bottlenecks, bugs, and errors and make informed decisions about how best to optimize your application.


For our largest online assessment system at Breakthrough Technologies, we start with 100 students simultaneously logging in and taking a test, then incrementally increase the load to 120K students taking the test at the same time. We verify that the average response time is acceptable for each request the client (browser) sends to the server (our software). If response times are too long, a student taking a test may worry that something is wrong and that their answers are not being recorded accurately. We determine a target load that we expect the system to handle and performance-test up to that point; this is usually 5 times the expected peak load on the system.
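
To make the stepped load profile concrete, here is a minimal sketch of what it can look like when scripted with k6, one of the tools we describe below (we use either JMeter or k6 depending on the project). The host, stage durations, student counts, and limits are illustrative placeholders, not our production numbers.

```typescript
// Illustrative k6 load profile: step the number of simulated students up
// gradually and fail the run if response times degrade too far.
// All hosts and numbers below are placeholders, not production targets.
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '5m', target: 100 },    // warm up with ~100 concurrent students
    { duration: '10m', target: 1000 },  // step the load up
    { duration: '10m', target: 5000 },  // continue toward the target load
    { duration: '10m', target: 5000 },  // hold at the target
    { duration: '5m', target: 0 },      // ramp back down
  ],
  thresholds: {
    // Fail the run if responses get too slow for students to trust the system.
    http_req_duration: ['avg<1000', 'p(95)<3000'], // milliseconds, placeholder limits
  },
};

export default function () {
  // Placeholder request; the real script walks through a full test session.
  http.get('https://example.com/healthcheck');
  sleep(1);
}
```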


 

How often do we run performance tests?

This is determined by the contract that BT has with each client. For our clients with large-scale assessments, we run performance testing once a year, after new features have been added and before the test window opens. This should happen far enough ahead of the go-live date that there is time to make system changes to address any issues performance testing exposes.


 

What's our performance testing process?


Define Our Scenarios & Expectations

First, we define which user scenarios are most critical to performance and what performance expectation each scenario should have. This includes the specific steps each scenario will take. For our student test-run performance tests, each simulated student logs in, selects the appropriate test, runs it, and submits it at the end; a sketch of what such a scenario can look like as a script appears below. Specifics about the type of test to run can be included (item types, length, etc.). If the system allows students to go back and review answers, and that is a valuable step to measure, it can be added as well. We have set the performance expectation for our biggest student-run test at 120K students all running the test at the same time, with a 20-minute ramp-up.
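
Here is a hedged sketch of that student scenario written as a k6 script. The host, endpoints, payload fields, item count, and think times are hypothetical placeholders; the real assessment API differs. The 120K-student, 20-minute ramp-up itself lives in the load configuration, similar to the stages block shown earlier.

```typescript
// Sketch of one student scenario: log in, pick a test, answer, submit.
// Endpoints and field names are hypothetical placeholders.
import http from 'k6/http';
import { check, sleep } from 'k6';

const BASE = 'https://assessment.example.com'; // placeholder host

export default function () {
  // 1. Log in as a student.
  const login = http.post(`${BASE}/api/login`, JSON.stringify({
    username: 'student001', password: 'secret',
  }), { headers: { 'Content-Type': 'application/json' } });
  check(login, { 'logged in': (r) => r.status === 200 });

  // 2. Select the assigned test.
  const tests = http.get(`${BASE}/api/assignments`);
  check(tests, { 'assignments loaded': (r) => r.status === 200 });

  // 3. Answer items, with "think time" between requests.
  for (let item = 1; item <= 5; item++) {
    http.post(`${BASE}/api/tests/123/items/${item}`, JSON.stringify({ answer: 'B' }),
      { headers: { 'Content-Type': 'application/json' } });
    sleep(3); // a student pauses between questions
  }

  // 4. Submit the test at the end.
  const submit = http.post(`${BASE}/api/tests/123/submit`);
  check(submit, { 'submitted': (r) => r.status === 200 });
}
```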


Record and Create a Reusable Script

There are many tools that can help develop a script to use as a performance test. At BT we use two different tools for different projects: JMeter and k6. These tools record all the requests made between the browser and the server (the software we are testing) while a user actually performs the scenario. This is done by pointing the browser at a proxy server and installing a certificate generated by the recording tool. Then, once recording has started, we log into our system and take the test once, and we end up with a recording of every request that is made and the response our system returns.


The next step is to convert this recording into a script that can be executed repeatedly with different users. Some parts of the recording will remain the same in each run, while other parts will have to be modified. The challenge is determining which components need to be parameterized and from where to retrieve the corresponding values for each test run. For instance, usernames and passwords may be retrieved from an external .csv file, while test IDs and authentication values may be derived from a previous server response or calculated by the script itself. Every request sent to the server should be examined for values that need to be parameterized, and these must be set correctly. When this part is finished, the script can be run repeatedly with as many students as you need.
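
As a hedged example of what this parameterization can look like in a k6 script, the snippet below reads per-student credentials from a CSV file and reuses an authentication token extracted from the login response in a later request. The file name, column names, endpoints, and JSON field are assumptions for illustration only.

```typescript
// Illustrative parameterization: per-user credentials from a CSV file and an
// auth token extracted from a prior response. Names and endpoints are assumed.
import http from 'k6/http';
import { check } from 'k6';
import { SharedArray } from 'k6/data';
import exec from 'k6/execution';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// Load users.csv once and share it across all virtual users.
const users = new SharedArray('students', function () {
  return papaparse.parse(open('./users.csv'), { header: true }).data;
});

export default function () {
  // Give each virtual user its own row from the CSV.
  const user = users[(exec.vu.idInTest - 1) % users.length];

  const login = http.post('https://assessment.example.com/api/login',
    JSON.stringify({ username: user.username, password: user.password }),
    { headers: { 'Content-Type': 'application/json' } });
  check(login, { 'login ok': (r) => r.status === 200 });

  // A value derived from a previous server response rather than the recording.
  const token = login.json('token');

  http.get('https://assessment.example.com/api/assignments', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```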



Set Up Environment

The environment in which the performance tests will run should be as close to the production environment as possible so that the results are useful. This involves starting up the performance test environment and configuring it to match production, installing the latest version of our software and any supporting software on the servers, and upgrading any subscriptions required to run the tests.

Set Up Test Data

Any data that is required by the performance test script must be set up and tested. For the student-run tests we execute, there are CSV files containing login information for each student who will run the test. The corresponding data must be set up in the system database: the test each student will run must be installed, and the students, schools, classrooms, and test assignments for each of those students must be created. Backups of the database are taken and used by a reset script to restore the database to a known good state. Once the framework is ready, we use real data to perform small trials, both on the test-driving platform and locally, to make sure everything is operating properly.
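
One simple way to run such a small trial, sketched below in k6 syntax, is to execute the same scenario with a single virtual user and a single iteration against the freshly loaded data. The endpoint and credentials are placeholders, not our actual configuration.

```typescript
// Smoke-test configuration: exercise the scenario once with one virtual user
// to confirm the test data and environment are healthy before a full run.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 1,          // a single simulated student
  iterations: 1,   // run the scenario exactly once
};

export default function () {
  // Placeholder login against the freshly loaded test data.
  const res = http.post('https://assessment.example.com/api/login',
    JSON.stringify({ username: 'student001', password: 'secret' }),
    { headers: { 'Content-Type': 'application/json' } });
  check(res, {
    'login succeeded': (r) => r.status === 200,
    'token returned': (r) => r.json('token') !== undefined,
  });
}
```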

Execute the Tests

The tests are run starting with lower-load tests, increasing the load level as each test succeeds. If a test at a given load level does not perform within our expected limits, we troubleshoot to determine the cause. If all load levels pass, that performance test is considered complete.

Collect & Record Data

After every test run, successful or not, data is collected and results are logged. We save a screenshot of the AWS dashboard showing system performance during the test and the maximum CPU utilization for the application. From the test-driving platform, we record the peak and average response times, the test duration, and the start and end times. We query the database before and after the test to confirm that the correct amount of new and updated data exists. If New Relic is in use, we capture its database and summary dashboards as well.
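
As a hedged illustration, expected limits and basic result capture can also be encoded in the script itself. The k6 sketch below declares thresholds that mark the run as failed when limits are exceeded and writes a summary of the collected metrics after every run; the metric limits, endpoint, and file name are placeholders rather than our actual settings.

```typescript
// Illustrative pass/fail limits and result capture for a load-level run.
// Thresholds fail the run when limits are exceeded; handleSummary writes the
// collected metrics to a file after every run, pass or fail.
import http from 'k6/http';

export const options = {
  thresholds: {
    http_req_failed: ['rate<0.01'],                 // under 1% request errors
    http_req_duration: ['avg<1000', 'p(95)<3000'],  // ms; placeholder limits
  },
};

export default function () {
  http.get('https://assessment.example.com/api/healthcheck'); // placeholder request
}

// Record peak/average response times, durations, and counts for the run log.
export function handleSummary(data) {
  return {
    'run-summary.json': JSON.stringify(data, null, 2),
  };
}
```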

Troubleshooting & Infrastructure Updates

When a test result does not meet the expected performance requirements, we analyze what is causing the issue. We look at system logs to see whether an error was logged that helps explain why the system struggled. We compare the results against the outcomes of the same assessment from the previous year, examine slow queries, and determine whether they can be improved. We then make changes to the infrastructure size, capacity, and tuning to correct the system performance.

Repeat

We run the test again and see if the system performance is acceptable. Sometimes this is a lengthy process that takes multiple iterations to get right. When all the tests pass, the same changes are made in the production environment so that it is as performant as possible.

 


Tools

At Breakthrough Technologies, we use several different tools to accomplish performance testing. For the large-scale assessment platforms, we use JMeter to record and parameterize the script and to generate a .jmx file. RedLine13 is a cloud-based load testing system that we use to run the .jmx files; it reads the .csv data files and drives the script with the desired number of students and a specified ramp-up time. k6 is another tool that can both record a test script and run it for many concurrent users, and we use it for various other projects.

AWS CloudWatch dashboards are set up to show pertinent information about the system under test. New Relic is a cloud-based tool that also provides dashboards showing system performance. Our infrastructure is hosted on Amazon Web Services (AWS), which lets us inspect its components and make changes through the AWS console. Most of the infrastructure is managed through Terraform, an Infrastructure as Code tool that allows us to create and change our infrastructure in a consistent and predictable way.

Sequel Ace is used to access and query the database as needed.


 
When Should You Consider Performance Testing?

Performance testing should be carried out whenever a significant new system is implemented or existing systems undergo major upgrades. It is critical for validating system scalability and finding potential performance bottlenecks before an application is deployed to production. Organizations should conduct performance tests regularly to proactively identify problems that could lead to poor performance down the road. Ultimately, performance testing ensures a consistent and reliable user experience.

Our end-to-end performance testing services assure high responsiveness, availability, and scalability of your applications for the long run. Let us help you guarantee your applications run optimally and offer outstanding experiences for your customers. We are proud of the work we do, and we can't wait to help you reach the next level.






