Developer Productivity Report 2015: Java Performance Survey Results

Summary & Conclusion

Wow, we’re at the summary already! That went quickly. It must have been all the amazing graphs and jokes that made the time just fly by. For those of you who need a refresher (or who just want to see the highlights), let’s check out what we learned from the data.

TL;DR – For those with limited time.

We released a survey in March 2015 focused on Java Performance. Here are some stats:

  • A total of 1562 amazing respondents completed our 19-question survey, which meant we could write this report, so thank you!
  • ZeroTurnaround, the sponsors of RebelLabs, donated $1000 to the Dogs for the Disabled charity. Great job, survey takers!

Here are some of the important results and highlights from individual survey questions:

  • The majority of our respondents were software developers, working on web applications.
  • Over nine out of ten respondents say it’s the developers who fix performance issues, regardless of who finds them.
  • Almost half of the respondents profile only when performance issues arise.
  • 20% of teams write their own custom in-house tooling to run their performance tests.
  • Almost 50% of teams use multiple tools when testing.
  • The most common root causes of performance issues are slow database queries, inefficient application code and excessive database queries.
  • It takes just under one working week on average to diagnose, fix and test performance issues. Coincidentally, the exact same amount of time that most managers take to reply to super urgent emails.
  • Three out of every four respondents state their performance issues affect the end user.
  • On average, five and a half performance issues are found during each application release.
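The “excessive database queries” root cause at the top of that list is very often the classic N+1 access pattern: one query to fetch a list, then one further query per item. As a minimal sketch (query counts are simulated with a counter; the class and method names are hypothetical, not from the survey), compare the per-item and batched lookup styles:

```java
import java.util.*;

// Hypothetical illustration of the "excessive database queries" root cause:
// an N+1 access pattern issues one query per item, while a batched lookup
// issues a single query for the whole list.
public class NPlusOneDemo {
    static int queryCount = 0;

    // Simulated single-row lookup: one "query" per call.
    static String findAuthorById(Map<Integer, String> db, int id) {
        queryCount++;
        return db.get(id);
    }

    // Simulated batched lookup: one "query" for all ids.
    static Map<Integer, String> findAuthorsByIds(Map<Integer, String> db,
                                                 List<Integer> ids) {
        queryCount++;
        Map<Integer, String> result = new HashMap<>();
        for (int id : ids) {
            result.put(id, db.get(id));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, String> authors = Map.of(1, "Ada", 2, "Grace", 3, "Barbara");
        List<Integer> bookAuthorIds = List.of(1, 2, 3);

        // N+1 pattern: one query per book's author.
        queryCount = 0;
        for (int id : bookAuthorIds) {
            findAuthorById(authors, id);
        }
        System.out.println("N+1 lookups issued " + queryCount + " queries");

        // Batched pattern: a single query for all authors.
        queryCount = 0;
        findAuthorsByIds(authors, bookAuthorIds);
        System.out.println("Batched lookup issued " + queryCount + " query");
    }
}
```

With a real database the difference is round trips, not map lookups, which is exactly why this pattern shows up so prominently as both an “excessive queries” and a “slow queries” root cause.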

If we look at just the responses given by those who state dedicated performance teams are responsible for testing, we uncover some interesting findings.

  • Dedicated performance teams are more likely to find a greater number of issues than any other team. Over twice as likely when compared to the operations team for example.
  • Dedicated performance teams spend on average almost 50% longer diagnosing, fixing and testing performance issues, compared to other teams.
  • Dedicated performance teams spend over 40 days, on average, diagnosing, fixing and testing all performance issues they find each release, compared to a software developer who spends just over 20 days.

Here’s what we learned about the tools our respondents use:

  • JProfiler and custom in-house tools find more performance issues than any other tools. XRebel comes in third, leading a chasing pack.
  • Using a greater number of tools increases your chances of finding more performance issues than sticking to just one.

We performed the same pivot across application complexity and the performance testing processes that teams follow:

  • As application complexity increases, the number of database query issues and HTTP session issues also increases, while other root causes tend not to increase.
  • Profiling your code frequently gives you a greater chance of a performance boost than profiling less regularly.
  • As application release cycles get longer, the data suggests that people do spend additional time performance testing.

We also compared the activities of those respondents who claim their users are not affected by performance issues and those whose users are affected. We found that teams who have the happiest end users:

  • Work in smaller teams – the teams that design, develop, test, and support the application are 30% smaller.
  • Have less complex applications – 38% fewer application screens/views.
  • Test earlier – 36% more likely to run performance testing while they code.
  • Are more efficient – 38% faster at diagnosing, fixing, and testing performance issues.
  • Look after their databases – 20-25% less likely to have excessive or slow database query issues.
  • Are more proactive – Almost 40% more likely to profile on a daily or weekly basis.
  • Are less reactive – Almost 20% less likely to test reactively when issues arise.
