
Developer Productivity Report 2015: Java Performance Survey Results

Part 2

Holy pivoting, Batman!

Parts 2 and 3 of this report both focus on pivoting the data we have, allowing us to answer more specific questions. But first of all, what is a pivot? When we talk about pivoting data in this report, we’re really talking about adding a clause to a question. Let’s say we have a sports league with a big table of results, including whether each game was a win or a loss and whether it was played at home or away. We might ask a question like: how many games has the legendary baseball team HitBallFar won at home? We’d first filter the results down to the team named HitBallFar, then filter down to the home games, and finally count the games marked as a win.

This is pretty much what we’re doing with the survey data, typically filtering across two survey questions at a time to find trends. In this section of the report, we’ll focus on teams within an organization and on the application itself. Let’s get pivoting!
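
To make that concrete, here’s a minimal sketch of the HitBallFar pivot using Java 8 streams. The GameResult class, team names and results below are purely illustrative and aren’t part of the survey data:

import java.util.Arrays;
import java.util.List;

public class PivotExample {

    // Illustrative row in the results table described above.
    static class GameResult {
        final String team;
        final boolean playedAtHome;
        final boolean won;

        GameResult(String team, boolean playedAtHome, boolean won) {
            this.team = team;
            this.playedAtHome = playedAtHome;
            this.won = won;
        }
    }

    public static void main(String[] args) {
        List<GameResult> results = Arrays.asList(
                new GameResult("HitBallFar", true, true),
                new GameResult("HitBallFar", true, false),
                new GameResult("HitBallFar", false, true),
                new GameResult("ThrowBallFast", true, true));

        // The "pivot": filter on two clauses (team and home/away), then count the wins.
        long homeWins = results.stream()
                .filter(r -> r.team.equals("HitBallFar"))
                .filter(r -> r.playedAtHome && r.won)
                .count();

        System.out.println("HitBallFar home wins: " + homeWins); // prints 1
    }
}

Swap the game results for survey answers and the two filters for two survey questions, and that’s exactly the kind of pivot you’ll see throughout this part of the report.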

Teams


Teams and their Structure

First of all, we’ll concentrate on the teams responsible for performance testing: the people who actually run the monitoring and find the issues internally. Our first question is about the role of those people. Is it also their job to fix the issues they find?

Figure 2.1 shows the percentage of people responsible for performance testing, broken down by team, who also carry out the fix when they find an issue. Within development teams, 96.4% of developers who find an issue will fix it themselves, while for QA teams this drops massively to a mere 7.1%.

Approximately one in three dedicated performance teams will make the fix once they find the problem. This figure increases slightly for Operations teams, to almost 40%. Of course, different teams might find different problems, and the fix might be a coding fix for QA whereas it’s a configuration fix for the Operations team. Overall, it’s the development team that bears the brunt of the fixes.

Figure 2.1: Are the people who monitor and find performance issues the same people who fix them?

Are dedicated performance teams better?

Next we’re going to zoom in on dedicated performance teams by themselves and look at whether they achieve greater success than other teams. After all, it is their role. One rough measure of success is the number of issues you find in the application. It’s rough because different issues have different impacts, and some are more meaningful than others, but we’ll use this number as a guide. The line graph in Figure 2.2 shows how the results for a dedicated performance team vary compared to the overall average we discovered back in Part 1, Figure 1.19. This is represented as a percentage increase or decrease relative to the results from other teams. We can see a definite trend: dedicated performance teams do tend to find more issues than other teams who run performance tests themselves. Their likelihood of finding either 0 or 1 bug(s) decreases by up to 10%, while the chance of finding 6-19 or 20+ bugs rises by up to 6%.
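
A note on reading those deltas: we’re assuming the chart plots a simple relative change of the dedicated team’s share against the overall baseline from Figure 1.19; the report doesn’t spell the arithmetic out, so treat this as our reading. The sketch below uses placeholder shares, not real survey figures.

public class Figure22Delta {

    // Relative change (in %) of the dedicated performance team's share for one
    // answer bucket versus the overall baseline share from Figure 1.19.
    static double relativeChangePercent(double teamShare, double overallShare) {
        return (teamShare - overallShare) / overallShare * 100.0;
    }

    public static void main(String[] args) {
        // Placeholder values, not survey data: a 30% share versus a 25% baseline
        // is a +20% relative rise.
        System.out.println(relativeChangePercent(30.0, 25.0)); // prints 20.0
    }
}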

Figure 2.2: Do projects with specific engineering teams let fewer bugs impact the user?

Do different teams prefer different tools?

We compared tool selection across all the teams and found little difference in usage of the tools available on the market; however, some teams were more likely to create custom in-house tools. In fact, if you’re in QA, a dedicated performance team or Operations, you’re up to 40% more likely to use custom in-house tools compared with other teams. Another interesting statistic comes from the groups who state that nobody is directly responsible for performance testing: 30% of those respondents use no tooling whatsoever for their performance testing – sad panda. Presumably, those without performance tools just rely on their logs, program by coincidence, or simply don’t test at all!

Do different teams test at different times?

The data shows that developers do prefer to fail early, so they tend to do more performance work on the codebase while they write code, compared to other teams. This is pretty much expected. However, for those where nobody is directly responsible, almost 20% of votes go to testing during the development phase, the third highest of all the teams. They also do the least performance testing through CI, snapshot builds, system/integration testing and staging. Things change when we get to production, where dedicated performance teams are second highest in that category with just over 18% of the share. QA and the dedicated performance team focus more on the CI, snapshot build and system/integration test phases than any of the other teams.

Who spends longer testing?

In terms of which team spends longer on performance testing, the picture is quite varied, as can be seen from the table in Figure 2.3. The Operations team and the “Whoever writes the code” category provided the most votes for either no performance testing whatsoever or just 1-2 days of testing per release (among those where responsibility was claimed by a team). Most teams favored the 1-2 days option; an exception was the dedicated performance team, whose modal answer was 2-4 weeks, with over 28% of responses. The QA team and Architects were also strong in this area, although not quite as strong as the performance team.

Figure 2.3 (table): How much time, per release, do you spend on profiling and performance monitoring?


DOWNLOAD THE PDF
