
Developer Productivity Report 2015: Java Performance Survey Results

Best Practices


Performance Testing Best Practices

In this section we pivot on just one question: do your performance issues tend to affect users in production? We always want the answer to be no. We want to find, fix and test issues before they reach production and certainly before they reach users. So, next we'll look at the differences between those whose users are affected by performance-related issues and those whose users aren't, to see if there are trends or best practices we can learn from.

Who’s responsible?

To be honest, there was little in the data to suggest a trend one way or the other. Respondents who said their users were not affected were a few percentage points more likely to say that whoever writes the code is responsible for performance than those whose users were affected, and that was the biggest difference in the stats. As a result, these numbers don't tell us whether who holds responsibility is an advantage or a disadvantage.

Application complexity

How about the complexity and size of the application our respondents work on? That must play a part in how many issues affect users. Let's start with how big the team is, which is also a rough measure of how big the application is. We can see from figure 3.5 that those who say their users are affected by performance issues have around 23 people in the team, while those who aren't affected have only around 16. Said another way, teams whose users see performance issues are 45% larger than teams with happy users. It's hard to say why team size makes a difference, as many factors could feed into it. You'd like to think that with more resources available it would be easier to find someone to run more performance testing, but clearly that gets squeezed, perhaps in favour of more functionality, as is often the way.

Figure 3.5: How many people are in the team that design, develop, test and maintain your application?

The complexity or size of an application is also measured in this survey by the number of screens an application has. Here, in figure 3.6, we see substantial differences: those whose users see performance issues have applications with an average of 130.4 screens, while those whose users don't complain about performance have far fewer, 81.7 screens. That means the applications with unhappy users have 60% more screens (130.4 / 81.7 ≈ 1.6) than the applications with happy users. This could be a sign of a complex application, or perhaps simply of an application that takes longer to test because it's larger. That would mean more time is needed to run performance tests, time which might not always be available in a release cycle.

Figure 3.6: How many different screens/views does your application have?

It’s all About the Timing

Again, we can see a big divide in when performance testing is done. It seems that doing more performance testing earlier in the release cycle has an impact on your end users, as we can see from figure 3.7. Those with unaffected users test while they code 36% more often than those whose users are affected by performance issues. The trend continues all the way through to production, where those with unhappy users test more in production than those with happy users, although it's probably too late by then, as the bugs have already been let loose.

Figure 3.7: At what stage do you perform profiling and performance tuning on your application?
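To make "testing while you code" a little more concrete, here is a minimal sketch of a microbenchmark written with JMH (the Java Microbenchmark Harness). JMH isn't something the survey asked about, and the workload (a list scan versus a map lookup) is purely illustrative; the point is simply that small design decisions can be measured while the code is still fresh, long before a release candidate exists.

```java
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.*;
import java.util.concurrent.TimeUnit;

// Measures a small design decision early: looking up one key in a List
// (linear scan) versus a HashMap (hash probe). The data set and key are
// placeholders; swap in the code you actually care about.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class LookupBenchmark {

    private List<String> list;
    private Map<String, String> map;

    @Setup
    public void prepare() {
        list = new ArrayList<>();
        map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            String key = "user-" + i;
            list.add(key);
            map.put(key, key);
        }
    }

    @Benchmark
    public boolean scanList() {
        return list.contains("user-9999");
    }

    @Benchmark
    public boolean probeMap() {
        return map.containsKey("user-9999");
    }

    public static void main(String[] args) throws Exception {
        new Runner(new OptionsBuilder()
                .include(LookupBenchmark.class.getSimpleName())
                .forks(1)
                .warmupIterations(3)
                .measurementIterations(5)
                .build()).run();
    }
}
```

A benchmark like this runs in a couple of minutes from the build or IDE and gives a number you can compare against next week's build, which is exactly the kind of habit the test-while-coding group appears to have.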

Should we blame their tools?

In short, no, we absolutely shouldn't! There was very little difference in the tooling statistics across the board. The biggest gap was that teams who report their users don't suffer from performance issues are 20% more likely to use custom in-house tools than teams whose users do. This could again point to teams with the time and expertise to write their own custom tooling being the teams most likely to performance test accurately, with greater expertise and more time available. It could well be a signal of the kind of people who write those kinds of tools.

How About the Fix

This is not so much a best practice as an observation. If we notice a difference here in how long a fix takes to make and test, we can't just say the solution is to fix and test twice as fast and your users won't see performance issues any more! However, in figure 3.8 we can see that there is a big difference. In fact, it takes those whose users are affected over 60% longer to diagnose, fix and test a bug than those whose users don't see performance issues. This could well correlate directly with the phase in which testing is done, linking back to the results in figure 3.7, which support this claim.

Figure 3.8: When you find an issue, how long does it take, in days, to diagnose, fix and test on average?

What’s the Root Cause?

If we now look at the root causes of the issues that occur, we can see a focus around all things database. In fact, that's pretty much the only trend we found, so let's look at it in figure 3.9. Let's start with the database itself. We're only looking at low numbers, so the difference isn't too great, but those with users affected by performance issues are 28% more likely to have speed problems in their backend database than those whose users are not affected. Database query issues were a much more common problem, and a similar split can be seen. Performance issues caused by too many database queries are almost 30% more likely in applications whose users suffer from performance issues than in those that don't, and slow database queries are 36% more likely. This is substantial evidence that database performance and database interactions are a key component of user happiness.

Figure 3.9: What are the typical root causes you most often experience?
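The "too many database queries" result very often takes the classic N+1 shape: one query to fetch a list, then one extra query per row. As a hedged illustration only, here's a plain JDBC sketch using hypothetical orders and order_lines tables (the schema and column names are not from the survey). The first method issues one query per order; the second fetches the same data in a single round trip.

```java
import java.sql.*;
import java.util.*;

public class OrderSummaries {

    // Anti-pattern: one query for the orders, then one more query per order
    // for its line count -- the "too many database queries" (N+1) shape.
    static List<String> loadOneQueryPerOrder(Connection conn) throws SQLException {
        List<String> summaries = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet orders = st.executeQuery("SELECT id, customer FROM orders")) {
            while (orders.next()) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT COUNT(*) FROM order_lines WHERE order_id = ?")) {
                    ps.setLong(1, orders.getLong("id"));
                    try (ResultSet count = ps.executeQuery()) {
                        count.next();
                        summaries.add(orders.getString("customer")
                                + ": " + count.getInt(1) + " lines");
                    }
                }
            }
        }
        return summaries;
    }

    // Same data in a single round trip: let the database join and aggregate.
    static List<String> loadWithSingleQuery(Connection conn) throws SQLException {
        List<String> summaries = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT o.customer, COUNT(l.id) AS line_count "
                   + "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                   + "GROUP BY o.id, o.customer")) {
            while (rs.next()) {
                summaries.add(rs.getString("customer")
                        + ": " + rs.getInt("line_count") + " lines");
            }
        }
        return summaries;
    }
}
```

ORMs can produce the same shape invisibly through lazy loading, which is one reason this class of problem often only starts to hurt once data volumes reach production size.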

Big bang, or Little and Often?

The final metric we'll look at in this section is how the frequency of performance testing affects end users. We can see from the graph in figure 3.10 that there is a trend once more. Applications whose users are affected by performance issues are less likely to be tested frequently: those who test at least every week are 25% less likely to have affected users. At the other end of the spectrum, it's the applications whose users do see performance issues that are most likely to be profiled on a yearly basis, or even less frequently.

From this, we can say that the frequency of profiling does have an impact on whether end users are affected by performance issues in production. This graph shows the answers to the frequency question that were on a regular timescale, i.e. weekly or yearly. For those with affected users, only 45% of responses picked one of these regular timescale answers; for those whose users are not affected, that number rises to 55%. This suggests that profiling every x days is beneficial, and the smaller that number gets, the bigger the benefit. A sketch of what one regular, automated check might look like follows below; after that, let's take a look at how the remaining respondents answered.

Figure 3.10: How does frequency of testing affect happy users?
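As a rough idea of how a regular cadence can be automated, here is a minimal sketch: a JUnit 5 test that times a stand-in workload against an arbitrary budget, which a CI server could run on a nightly or weekly schedule. The class name, workload and 200 ms budget are all placeholders, and a wall-clock assertion like this is a crude guard-rail rather than a replacement for proper profiling.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// A simple scheduled guard-rail: fail the nightly or weekly build if a hot
// path drifts past its time budget, so regressions surface on a regular
// cadence instead of in front of end users.
class ReportGenerationPerfTest {

    private static final long BUDGET_MILLIS = 200; // placeholder budget

    @Test
    void reportStaysWithinBudget() {
        long start = System.nanoTime();
        generateReport(); // stand-in for the real hot path under test
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        assertTrue(elapsedMillis <= BUDGET_MILLIS,
                "Report took " + elapsedMillis + " ms, budget is " + BUDGET_MILLIS + " ms");
    }

    // Placeholder workload standing in for the application code being guarded.
    private void generateReport() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            sb.append(i).append(',');
        }
    }
}
```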

Testing Reactively

Figure 3.11, which looks at the other answers to the same profiling frequency question, is heavily weighted towards the answer "When we see issues". This is a reactive approach: testing is performed only when an issue occurs. To be clear, profiling isn't done at a set time to see whether bugs exist; rather, it's done because a bug has already been found, so we can label this option a reactive measure. The flipside is a proactive approach, in which profiling is done regularly to find potential issues rather than to react to them. We can see that those whose users are affected by performance issues are 23% more likely to adopt the reactive method of profiling when an issue is found than those whose users are not affected.

Figure 3.11: How does frequency of testing affect happy users?


