
Developer Productivity Report 2015: Java Performance Survey Results

Which teams catch more bugs?

Next, we can look at which teams catch more performance-related bugs. As shown in figure 2.4, dedicated performance teams caught the most bugs, with architects following close behind and developers not far behind them. Interestingly, QA teams struggle a little here, and Operations teams struggle even more, catching only half as many bugs as architects or dedicated performance teams do. It is, of course, worth mentioning again that different people might be looking for different types of bugs.

Figure 2.4: How many performance-related bugs do you find in a typical application release?

Who spends longer fixing bugs?

There are a couple of things we can look at when we focus on the time taken to fix bugs. Firstly, there's the time each team takes to fix a single bug; secondly, there's the total time each team spends fixing bugs in a release, based on the number of bugs they find. Let's focus on the former first.

The line graph in figure 2.5 shows, as a percentage, how each team compares with the overall average time for diagnosing, fixing and testing a single issue (remember the overall average is 4.38 days, as we discovered in figure 1.16). We can see that when we fail early, it's far cheaper to fix issues, assuming all issues are of equal complexity. We also notice the huge spike that almost pokes us in the eye! It shows that a dedicated performance team takes almost 48% more time than average to diagnose, fix and test an issue.

Figure 2.5: When you find an issue, how long does it take in days to diagnose, fix and test on average?
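To make the arithmetic concrete, here's a minimal Java sketch of how the relative percentages in figure 2.5 map back to absolute days. The 4.38-day overall average and the 48% figure for dedicated performance teams come from the text above; the class and method names are just illustrative.

```java
public class FixTimeRelativeToAverage {

    // Overall average time to diagnose, fix and test one issue (figure 1.16).
    static final double OVERALL_AVG_DAYS = 4.38;

    // Percentage above (+) or below (-) the overall average, as plotted in figure 2.5.
    static double relativeToAverage(double teamDays) {
        return (teamDays / OVERALL_AVG_DAYS - 1) * 100;
    }

    public static void main(String[] args) {
        // Dedicated performance teams sit ~48% over the average,
        // which implies roughly 4.38 * 1.48 = ~6.5 days per issue.
        double perfTeamDays = OVERALL_AVG_DAYS * 1.48;
        System.out.printf("Perf team: %.2f days per issue (%.0f%% vs. the %.2f-day average)%n",
                perfTeamDays, relativeToAverage(perfTeamDays), OVERALL_AVG_DAYS);
    }
}
```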

Who spends longer fixing bugs in a release?

Now, what does this mean in terms of the total time spent fixing issues in a full release? Well, we already know the number of performance-related issues found per team, from figure 2.4. We can multiply those numbers by the time it takes each team to fully diagnose and fix an issue. The bar graph in figure 2.6 shows the total time, in days, spent diagnosing, fixing and testing all performance issues in a release. We can see straight away that a dedicated performance team spends well over twice as long diagnosing, fixing and testing performance issues as a regular developer. We'll also notice that developers spend the least time fixing issues in a release, even though they find more bugs than the average. They can spend less time overall on a release's issues because their turnaround time for fixing a performance issue is so much shorter, assuming the fix is of the same quality.

Figure 2.6: How much time in total do teams spend diagnosing, fixing and testing performance issues per release?
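For completeness, here's the derivation behind figure 2.6 as a minimal Java sketch: per-team bug counts (figure 2.4) multiplied by per-bug fix times (figure 2.5). The team names and numbers below are hypothetical stand-ins chosen to match the shape of the chart, not the survey's actual data.

```java
import java.util.Map;

public class TotalFixTimePerRelease {

    // Total days per release = bugs found per release * days spent per bug.
    static double totalDaysPerRelease(double bugsFound, double daysPerBug) {
        return bugsFound * daysPerBug;
    }

    public static void main(String[] args) {
        // Hypothetical inputs, NOT the survey's numbers: developers find
        // slightly fewer bugs but turn each one around much faster.
        Map<String, double[]> teams = Map.of(
                "Developers", new double[] {10, 3.5},
                "Dedicated performance team", new double[] {12, 6.5});

        teams.forEach((team, v) -> System.out.printf(
                "%s: %.0f bugs x %.1f days = %.0f days per release%n",
                team, v[0], v[1], totalDaysPerRelease(v[0], v[1])));
    }
}
```

With these made-up inputs, the dedicated performance team's total comes out at more than twice the developers', matching the shape of the real chart: a faster per-bug turnaround outweighs finding a few more bugs.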

Why should I care?

That's a great question! We're not asking you, by the way, so you don't need to answer. What we're actually asking is: how much do different teams care about their performance testing? It is commonly accepted that testing early and often finds more issues and makes fixes cheaper; in fact, figure 2.5 puts numbers behind the cheaper-fix claim. Next, we're going to see whether different teams look at performance testing in different ways – whether they choose to do it periodically or reactively. This should give us an idea of who treats performance testing as a first-class citizen in the application release cycle.

Figure 2.7: Frequency of profiling (table)

Check out the table in figure 2.7. One thing that jumps straight out is the dedicated performance team: they are almost twice as likely as any other team to profile their code daily. Developers are most likely to profile as they code, which is understandable as they're most active during the development phase anyway. Dedicated performance teams are also 50% less likely than other teams to profile code only when issues arise. We might read into this that dedicated performance teams are less reactive to issues, as they profile code and find issues more regularly.

Dedicated performance teams, QA and those in senior development roles also show a stronger preference for profiling monthly. This might point to these teams profiling a milestone or snapshot build, which would also be used during integration or system test cycles. Overall, profiling does look like a very reactive activity, which is a common way for this style of tooling to be used. It will be interesting to see how XRebel disrupts this market, given its primary focus is usage in the "As I write it" category to promote the "fail early and often" mentality.


DOWNLOAD THE PDF
