Effective performance management with the performance pipeline

Performance testing is an important but poorly executed discipline. Sometimes this is because teams lack the necessary experience; sometimes there simply isn't enough time in the release schedule for adequate testing. Whatever the reason, the results are below par. This whitepaper introduces the problems many teams experience with performance testing and offers tips on how you can improve the way your team runs performance tests.

Breaking the issues down

First of all, let’s understand the types of performance issues developers have to fix in their applications. According to the RebelLabs Performance Survey, the 5 most common performance issues are as shown in Figure 1.

The multiple choice question asked in the survey was, “What are the typical root causes you most often experience?” The first three issues are by far the most common causes of performance problems and are responsible for the majority of user impact. Interestingly, these are also issues that don’t require load to reproduce and could be found as developers write their code.

In this whitepaper, you will discover how the performance tools XRebel and XRebel Hub can eliminate these types of performance bottlenecks without the need to run large-scale, complex load tests. In fact, you can test for these types of performance issues during the development stages.

What is the problem?

Building software is hard. Writing the code is not the hard part, it’s delivering a working solution to end users on time. To achieve this, multiple aspects of the software project need to be considered:

  • Functionality is needed to deliver any value at all
  • Quality is needed to prevent critical bugs
  • Performance is needed to be able to serve all users
  • Security is needed to prevent breaches and injections

Each of these aspects typically receives investment in descending order of the list above. The classic software mantra is “make it work, make it right, make it fast”. Often, the “make it secure” stage is missing entirely, which no doubt annoys many security practitioners.

Performance issues are discovered too late!

Performance is third on the list and thus only receives any attention left over after the functionality and quality goals are met. In many development shops this left-over time amounts to zero, or very close to zero, perhaps with the odd emergency fix to stop part of the application grinding to a halt in production. Therefore the real challenge is trying to improve performance as cheaply and effortlessly as possible.

Testing earlier in the cycle may seem like a tough ask, particularly when your performance testing comprises complex load and stress tests on machines similar to those used in production. The key is separating out the tests that can be run outside of such a complex environment and running them in a more suitable setting. Running the right tests at the right stages is one of the core values of the performance pipeline, a performance model that sets out to improve application performance by shifting your performance testing left.

This whitepaper will introduce the performance pipeline model and demonstrate how you can use performance tools from ZeroTurnaround to test and discover performance bugs when they are at their cheapest and easiest to fix.

The Performance Pipeline

The performance pipeline model sets out to fix the performance problem by baking performance into the correct development stages — to make testing cost-effective and effortless. The performance pipeline focuses on various aspects of performance testing at every stage of the application lifecycle. You will need to be smart about what can be achieved at each stage. For example, chasing milliseconds during development isn’t effective, as the production environment will be considerably different.

Test early

The benefits of early testing are widely accepted among most software development teams. After all, we run quality tests while we write code, so why shouldn’t we test for performance at the same time? The findings from an IBM Systems Sciences Institute research paper show that the time spent on fixing bugs grows exponentially the closer you get to production. It’s therefore key to perform as much testing as you can — as early as possible.

The five stages of the Performance Pipeline

This section will explore what you should be thinking about and testing at every stage of the application lifecycle. We will also explore how performance tools from ZeroTurnaround can help you achieve this at the various stages. For more information, be sure to visit the Performance Pipeline site.

Requirements analysis

The requirements analysis phase is, obviously, not the time to test for performance: you don’t yet have an application, or even a design. It is, however, the phase in which you can set out the performance you expect from the application. There are no tools that can help at this stage, but we do recommend some best practices on the Performance Pipeline site.


Design

The design stage is very similar in that there isn’t any code that you can really test, unless of course you choose to prototype some code to verify design ideas. There are a number of areas that must be clarified in the design stage that outline how your application should be developed. These include decisions around concurrency, resource pools, resource access, data fetching strategies and much more.

Again, tools don’t really exist to help out in this stage, so we’ll point you towards some of the design tips on the Performance Pipeline site and move on to look at some performance tools that can help you in the next stage.


Development

The development phase is the first time you can quantifiably test code that could make it into production. The important question is: what should and shouldn’t you be testing in this phase? The environment you run during development will be quite different from the one in production, so you have to be careful about what and how you test. For instance, load testing too early may surface false bottlenecks in areas of your application that simply aren’t a problem in production.

Not all performance issues require load to reproduce. You can start looking for those problems earlier, in the development stage, before the test phase. By discovering potential issues earlier, you minimize the risks associated with bad performance and make the issues cheaper to fix.

Another thing to note is that when talking about improving performance, most folks will think about performance tuning or performance optimization. This might include reengineering algorithms, or coming up with clever hacks to speed things up and tune systems to get the maximum output from hardware and software. However, in most cases the biggest bang for the buck is just fixing the bugs that cause performance degradation rather than trying to improve healthy code.

XRebel – performance tool for Java development

XRebel gives your developers real time insight into their web apps during initial development, helping them identify and resolve application performance bottlenecks early on.

By providing awareness of the performance associated with each transaction, XRebel reduces the effort associated with performance management and debugging. The most tangible benefit for your development team is the reduced time spent analyzing, reproducing and resolving production performance issues. Since XRebel helps to detect and fix the most common issues before they reach production, that time can now be spent on completing other tasks, increasing your overall team productivity.

“The issues I found using XRebel would have been very expensive had they gone to
production. It would easily compensate the license costs by a factor of 10 or more.” — Rolf Schenk CEO, Joytech GmbH

Real time feedback

XRebel is designed to be used by individual developers as a non-intrusive tool that notifies the developer about anomalies detected during manual testing. The developer makes changes in the source code and periodically runs the application to validate new functionality. This is the moment XRebel comes into play: when a performance anomaly is detected, the XRebel toolbar notifies your developer about the problem, whether the request latency exceeds a predefined threshold or too many IO invocations are detected. Generally speaking, XRebel answers the question: “What just happened with the application?”

Request latency profiling

Latency has the most direct impact on the user experience, and hence it is the first thing you would naturally inspect. However, it is not enough to find the slow request; it is also important to know why it is slow. Where did the application spend most of its execution time? XRebel helps you make sense of the application structure and which layers are involved in the execution, and makes it easy to locate the most time-consuming methods within each request.
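To make the idea concrete, here is a minimal, hypothetical sketch of per-layer timing. It is not XRebel's implementation, just an illustration of the kind of breakdown a profiler automates: wrap each layer of a request, record how long it took, and report where most of the time went.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class LayerTimer {
    private final Map<String, Long> timings = new LinkedHashMap<>();

    // Run one "layer" of the request and record how long it took.
    public <T> T time(String layer, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            timings.merge(layer, System.nanoTime() - start, Long::sum);
        }
    }

    // Which layer consumed the most execution time?
    public String slowestLayer() {
        return timings.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("none");
    }

    public static void main(String[] args) {
        LayerTimer timer = new LayerTimer();
        timer.time("controller", () -> "handled");
        timer.time("database", () -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            return "rows";
        });
        System.out.println("Slowest layer: " + timer.slowestLayer());
    }
}
```

A real profiler instruments these boundaries automatically via bytecode weaving, but the principle — attribute time to layers, then drill into the most expensive one — is the same.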

Monitoring external invocations

Issues with slow database queries and excessive database access are the most common problems developers encounter. XRebel visualises the collected traces in a convenient manner, so it is very easy to locate problems in the database access layer. Because of the way XRebel groups similar queries, detecting problems with N+1 selects becomes trivial. And thanks to dedicated framework integrations, e.g. with Hibernate, the root cause of an N+1 selects problem is also located in no time. In addition, XRebel supports a variety of NoSQL databases (including MongoDB, Cassandra, HBase, Redis, Neo4j, and Couchbase) as well as monitoring HTTP calls.
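The grouping idea is easy to sketch. In this hypothetical example (not XRebel's actual logic), each SQL statement is normalized by stripping literal parameters, so structurally identical queries collapse into one group; a query shape repeated many times inside a single request is a likely N+1 select.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class QueryGrouper {
    // Replace literal values with '?' so structurally identical queries group together.
    static String normalize(String sql) {
        return sql.replaceAll("\\d+", "?").replaceAll("'[^']*'", "?");
    }

    // Count how often each query shape was executed during one request.
    static Map<String, Integer> group(List<String> queryLog) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String sql : queryLog) {
            counts.merge(normalize(sql), 1, Integer::sum);
        }
        return counts;
    }

    // One query shape repeated at or above the threshold suggests an N+1 select.
    static boolean looksLikeNPlusOne(List<String> queryLog, int threshold) {
        return group(queryLog).values().stream().anyMatch(n -> n >= threshold);
    }
}
```

Fed a log such as one `SELECT … FROM orders` followed by one `SELECT … FROM items WHERE order_id = ?` per order, the repeated item query stands out immediately — which is exactly why grouped views make this anti-pattern so visible.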


Test

As you get closer to production, testing becomes more involved. The main areas of testing are automated test suites during continuous integration, and traditional load/stress testing during system test and staging.

Continuous integration testing

There are a couple of ways to include performance testing at this stage:

  • Use your existing tests suites. It’s common to have amassed a large number of functional tests which run on each build during CI. Reuse these tests by monitoring the key interactions.
  • Track your build regressions. When there are significant differences between the performance behaviours of two recent builds, investigate the changes between the builds to determine whether the performance change is warranted by the new behaviour.
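A build-over-build regression check can be sketched very simply. The threshold and percentage logic below are illustrative assumptions, not XRebel Hub's actual rules: compare a key transaction's latency in the current build against the previous build, and flag the build when the degradation exceeds an allowed percentage.

```java
public class RegressionGate {
    // True if currentMillis regressed more than allowedPct over baselineMillis.
    static boolean isRegression(double baselineMillis, double currentMillis, double allowedPct) {
        double changePct = (currentMillis - baselineMillis) / baselineMillis * 100.0;
        return changePct > allowedPct;
    }

    public static void main(String[] args) {
        // A 30% slowdown against a 10% tolerance fails the gate.
        System.out.println(isRegression(100.0, 130.0, 10.0)); // regression
        System.out.println(isRegression(100.0, 105.0, 10.0)); // within tolerance
    }
}
```

A percentage tolerance matters here: absolute thresholds drown fast endpoints in noise and let slow ones degrade unnoticed, while a relative gate treats every transaction by its own baseline.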

Load testing & Staging

During staging, more robust performance testing can be performed, as you are now in a cloned production environment, or as close to one as you can get. You will want to start generating synthesized load on your application, as the resulting metrics will be closer to real-world results. Ensuring your staging environment matches your production environment involves many aspects, including hardware and software mirroring. If synthesized load is not providing you with reliable results, consider replaying production load on your staging environment. Likewise, if you cannot mirror your production data exactly, make your test data as production-like as you can.

XRebel Hub – catch performance regressions during testing

As we saw in the Development stage, your developers can use XRebel on their own workstations. However, to track regressions you need to store the results, visualize the historical data, and analyze and triage issues. XRebel Hub is designed exactly for such scenarios.

XRebel Hub runs in continuous integration (CI), test and staging environments. You can use your existing manual and automated tests to generate activity for the application.
XRebel Hub collects the performance metrics and sends the traces to a central repository for further analysis. Slow requests and excessive IO are flagged for review, while severe regressions and poor performance from new functionality trigger daily notifications.

There are 3 user experience aspects that XRebel Hub monitors: response and processing times, IO operations and exceptions.

Here’s why:

  • Latency has a direct impact on user experience: slow equals annoying.
  • IO count indicates the load on infrastructure. Requests with too many IO operations might perform fine during functional testing and not produce latency alerts. But they can easily become sluggish or start to fail under a real production load, with concurrent users.
  • Exceptions mean that the normal flow of the application gets disrupted. Not something you want your customers to experience.

The XRebel Hub dashboard

You can view an overview of performance changes for all components, gathered in a single XRebel Hub dashboard. You can then review the problems, assign tasks and share root cause information within your team. Once the issues are detected, your developer needs to understand what to do with them. This is where having a full profiler is indispensable. XRebel Hub analyzes the call trees of problematic requests, finds hot spots, pinpoints what causes a latency increase or what spawns excessive IO calls. The stack trace of an issue request is compared to a previous run, with problems highlighted, all the way down to the method level.

Debugging issues

XRebel Hub is there for every step of the issue’s life cycle: detect — understand — fix — confirm the fix. It is designed to find code-related performance issues early in the process and throughout the application life cycle, including continuous integration and testing cycles.


Production

Now your application is in production, and this is where the rubber meets the road. Even a well-designed and well-coded application will be subject to an entirely different set of factors that can and will affect performance. In this phase, you need to live up to your Service Level Agreements (SLAs) and meet the expectations set in the requirements analysis phase.

To ensure that SLA requirements are met, IT teams use various solutions to analyze, measure and then report, usually on a monthly basis. Production-focused solutions like application performance management (APM) and flow analysis provide in-depth data on the underlying causes that affect application QoS, but often they only reveal that an issue exists.


Summary

In this whitepaper we described the performance pipeline, a concept of mapping performance-related work and activities onto the stages of a software delivery pipeline. The main idea behind the performance pipeline is to make sure that the development team is aware of the performance of their product throughout the full length of the delivery process. Being aware of the performance of your application, and taking steps not to introduce performance regressions, is a continuous process. You can ensure reasonable performance at every stage of the delivery pipeline. Test proactively, rather than solving performance problems after your users have reported them.

Requirements Analysis — determine the performance requirements for the system that are needed to carry out the operation.

Design — ensure the architecture you design isn’t affecting your performance.

Development — catch the most obvious performance issues, typically related to database access, networking, and inefficient application code.

Test — make sure that the general performance of your code hasn’t regressed. Test the system in a production like environment with a production like load.

Production — monitor the performance of your system, prevent performance degradation by scaling the system, gather performance stats to feed back into development.

The Performance Pipeline model helps us to map the relevant tools that we can use at different stages to discover performance related issues.

During the development phase, you can use XRebel to pinpoint the most common issues by gaining real-time feedback on application behavior.

XRebel Hub takes the feedback one step further by discovering regressions, which is especially useful during the testing stage of the project.

The “Effective performance management with the performance pipeline” post is also available in PDF form; just click the button below to download it.

Download Performance pipeline whitepaper!
