
Developer Productivity Report 2013 – How Engineering Tools & Practices Impact Software Quality & Delivery

So who cares about quality software and predictable deliveries anyway?

In truth, we all should. You might be “Developer of the Year”, but if the team around you fails to deliver quality software on time, then it pays to review your team’s practices and tools to see if that is somehow related. So that’s what we did–and guess what: the things you do and the tools you use DO have an effect on your software quality and predictability to deliver. And we’re going to find out how.

How do the practices, tools and decisions of development teams have an effect on the quality and predictability of software releases? Seems a bit abstract, right?

In fact, this is one of the most frustrating aspects of talking about the quality and predictability of software releases. We all hear a lot of noise about best practices in software engineering, but a lot of it is based on anecdotal evidence and abstractions. Do we ever see much data related to the impact of so-called best practices on teams?

With our survey, the goal was to collect data to prove or disprove the effectiveness of these best practices–including the methodologies, tools, and company size & industry that form the context in which these practices are applied.

Download the PDF

Our data and metrics

In the end, we collected 1006 responses, which is reasonable for a survey in which all questions are required–last year, over 1800 developers completed at least half of our survey on tools and technologies.

Note: Getting good responses to surveys isn’t easy–most people find a 2-3 question survey palatable, but beyond raw numbers it’s hard to learn much from someone in just a few seconds. We narrowed our scope down to 20 questions on a one-page form, which took our own development team about 5 minutes to finish. Still, we didn’t see a flood of respondent participation.

So what metrics did we decide to track in order to understand how best practices actually work?

  1. Quality of software – determined by the frequency of critical or blocker bugs discovered after release
  2. Predictability of delivery – determined by delays in releases, execution of planned requirements, and in-process changes (aka “scope creep”).
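
As a rough illustration of how survey answers along these two axes might be turned into comparable numbers, here is a minimal sketch. To be clear: the class, method names, and scoring scheme below are our own illustrative assumptions, not the report’s actual methodology.

```java
// A sketch (under assumed inputs) of scoring the two metrics per team.
public class ReleaseMetrics {

    /** Quality: fewer critical/blocker bugs found after release means a higher score. */
    static double qualityScore(int criticalBugsPerRelease) {
        // Map bug counts to a 0..1 score: 0 bugs -> 1.0, more bugs -> lower.
        return 1.0 / (1.0 + criticalBugsPerRelease);
    }

    /** Predictability: reward shipped requirements, penalize delays and scope creep. */
    static double predictabilityScore(double delayRatio,       // slip / planned duration
                                      double requirementsDone, // fraction of plan shipped
                                      double scopeCreepRatio) { // added work / planned work
        double score = requirementsDone - delayRatio - scopeCreepRatio;
        return Math.max(0.0, Math.min(1.0, score)); // clamp to 0..1
    }

    public static void main(String[] args) {
        System.out.println(qualityScore(0));                     // a clean release
        System.out.println(predictabilityScore(0.25, 0.9, 0.1)); // a slightly late one
    }
}
```

Any such scheme involves judgment calls (how harshly to weight a delay versus a dropped requirement, for instance), which is exactly why we explain our quantification choices in Part I.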

After ascertaining that Quality and Predictability were two areas in which data could be gathered, we continued with further analysis based on tools used (e.g. VCS or CI servers), practices employed (e.g. testing, measuring, reviewing) and industry & company size.

Quick note about bias: When analyzing the data, we discovered a couple of areas where bias was present. Compared to the software industry as a whole, our respondent pool is disproportionately biased towards Software/Technology companies, as well as towards Java as a programming language.

A little history: ZeroTurnaround’s Java and Developer Productivity Reports from 2009 – Present

If you’ve been following ZeroTurnaround and RebelLabs for a while, you’ll know that this is our fourth report in as many years.

It started back in 2009, when we began our quest to understand developer productivity by looking at which Java EE application servers/containers 1100+ developers were using, and how much time drain from redeploys was associated with each one (we discovered that between 3-7 work weeks each year were lost to this process).

In 2011 we expanded our research efforts and this time asked approximately 1000 developers about Build Tools, IDEs, Frameworks and Java EE standards in addition to App Servers, and again asked about how much of each hour was lost to restarts. We also asked ~1250 developers in India about tools and productivity, and saw some interesting differences between India and the rest of the world.

By 2012, we wanted to go even further. Our Developer Productivity Report 2012 focused on the vast array of tools & technologies that developers use each day, and looked deeper into what makes developers tick, asking about developers’ work week, stress and efficiency. Releasing this report was, in many ways, the unofficial birth of RebelLabs and the idea that high-quality, educational technical content is something we should continue to focus on.

So where does that leave us for 2013 and beyond? Issuing another report on the popularity of IDEs, Build Tools, CI servers, Web Frameworks and Application Servers was one idea–people loved our 2012 report. But would learning that Vaadin jumped 1% in popularity from 2012 to 2013, or confirming that Eclipse, Subversion, Jenkins, Spring MVC and Tomcat are still #1 in their respective fields truly be of value to the Java community as a whole?

Instead, we looked to cover the more difficult areas, looking at how tools and practices affect organizations as a whole–namely with the Quality and Predictability of software releases. It’s our goal to be the premier provider of Java development productivity statistics outside of dedicated research agencies, and we’re completely transparent and honest about our data. We admit bias. We publish our raw data for your own analysis. So we set down some goals for how we would proceed.

Moving forward, let’s go to Part I, where we discuss why it’s hard to measure quality and predictability, and what we did to quantify these metrics.

Download the PDF

Responses (28)

  1.

    Vjatseslav Rosin

    September 19, 2013 @ 10:46 am

    I’m impressed that FogBugz is not that popular (just 1.7%). We are using Jira, but we were very close to choosing FogBugz when we introduced a new-generation issue tracker.

    It would be interesting to hear the reason why FogBugz is not that widely adopted.

  2.

    Vjatseslav Rosin

    September 19, 2013 @ 10:47 am

    Also totally agree on “having Daily standups and Team chat seem to be the best ways to communicate”

  3.

    Loïc Prieto

    September 19, 2013 @ 2:00 pm

    As always, a very interesting report! Thanks for the effort, even if I know it is all focused on selling JRebel. But the read is nonetheless very illustrative and helps me decide which technologies to learn next, so thanks a lot!

    Also, I would love to try JRebel at my work, but I know my bosses will never approve spending money on development that could be used for their travels and bonuses… And believe me, I’ve already presented the ROI document you prepared. I think this is one of those situations where a loss in predictability is produced by tech decisions being taken by managers who do not know jack about anything.

  4.


    September 20, 2013 @ 7:41 am

    Nice report! It kind of confirmed my gut feeling on what impacts quality and delivery.

  5.

    Oliver White

    September 20, 2013 @ 8:24 am

    What made you choose JIRA over Fogbugz in the end? Was it a single factor?

  6.

    Toomas Römer

    September 20, 2013 @ 8:29 am

    Probably FogBugz doesn’t have the same spread in the developer community as BitBucket and GitHub. It doesn’t have a free plan, or the free plan is quite limited. Also, BB and GH host so many OSS projects that people are familiar with them, and that familiarity guides their decision when they need to pick a VCS service.

    I recently read a tweet saying that last year GitHub was still the new thing, and this year it’s already a standard :). I wouldn’t go to such extremes, but it’s kind of true.

  7.

    Oliver White

    September 20, 2013 @ 8:32 am

    Thanks for the compliments! Maybe showing them this report would help your managers understand that both practices AND tools can have a significant effect on your ability to deliver quality software on time.

    At the same time, I’m sorry you see this report as being focused on JRebel. The fact of the matter is that it’s JRebel that pays for RebelLabs to exist and produce all the reports, articles, blogs and surveys that we do. For example, let’s imagine that a report takes 100 hours of just 1 engineer’s time. You see how expensive it can be ;-)

    We created RebelLabs to be a distinct research & content brand outside of the ZeroTurnaround technology products group. We’re a ‘thought product’, if you think of it that way. So you’ll have to hear about JRebel and LiveRebel from time to time with us over here at RebelLabs, unless you suggest we start selling reports for 3 USD per download! ;-)

  8.

    Vjatseslav Rosin

    September 20, 2013 @ 9:18 am

    We really liked FogBugz UX-wise. It’s really clear, simple and straightforward. But I think the final factor was the price of an on-site installation. JIRA was way cheaper.

  9.

    Loïc Prieto

    September 20, 2013 @ 9:35 am

    Hi! Thanks for answering, Oliver.

    I expressed myself badly, I’m sorry. This report was clearly not focused on JRebel; rather, I see it as a means to an end, even though the information conveyed in it remains 100% useful and interesting. Now, I don’t mean that in a bad way at all. We’re all working with money that comes from selling something, be it a product or our services. I’m not opposed to the idea, of course, being a professional programmer myself. In fact, all of the reports I’ve read here on RebelLabs have been very informative and neutral, and closely matched my own experience with the analysed products (or showed me new ones to look at). I think they’ve enriched the community as a whole, so I would like to thank and compliment the team on the very good work.

    I fully understand that you’re being paid by ZeroTurnaround to produce the blog and the reports–the “thought product” you mentioned. And I find it quite nice, because money spent on PR this way gives to the community freely, which, as I’ve said before, is a good thing to do. I’ve always found that “positive egoism” is the human trait that moves society forward much better than charity and good will.

    I thoroughly enjoy the articles and reports, with the geeky humour and useful information. (I feel like I’m being targeted by these reports, and the JRebel ads with the unicorns and rainbows.) So keep up the good work!

  10.


    September 25, 2013 @ 2:04 pm

    I haven’t seen anyone who can accurately predict delivery times for projects beyond a couple of weeks. You either fix the scope and flex the time, or fix the time and flex the scope–fixed time with fixed scope isn’t realistic. That’s the whole point of scrum and burn-down charts: to give constant updates on scope or time to the people who care. At the end of a sprint, if your prediction is wrong, you change the scope if you have a fixed-time release. During the sprint, you can tell you’re off track by looking at a burn-down or burn-up chart. Tools like Pivotal Tracker average your estimations, so you aren’t predicting when things will get done; the tool accounts for the error in your estimations by averaging how many points you completed in the last few sprints against how many points you estimated tasks to be, and provides estimations based on your average.

  11.


    September 26, 2013 @ 1:09 pm

    Thanks for the good stuff!!! By the way, have the trends been mixed up in the figure “The effects of pairing up on quality and predictability”? According to the text on page 22, the colours should be the other way around.

  12.


    September 26, 2013 @ 1:20 pm

    Page 18 (slide 22), that is.

  13.

    Oliver White

    September 30, 2013 @ 2:57 pm

    Thanks Mika, looks like we had some flipped colors in there. This report is so massive that little errors like this always seem to slip by, no matter how many reviews we do. I guess it’s why coders like to pair up as well–great effect on the “quality” of their code (or document), just like this case! Good job pairing up! :-)

  14.


    October 3, 2013 @ 1:28 pm

    This is from the document:

    “and JRebel, that time-saving tool we’ve heard
    about before, shows a statistically significant increase of 8% ”

    That you heard before..?

    This is all I have to say.

    Now I am waiting for a report from Audi on automatic gearboxes and how they affect drivers. (I am sure they have heard about DSG before.)

  15.

    Jevgeni Kabanov

    October 7, 2013 @ 7:28 am

    Whatever gets the giant robots going is good with me.

  16.

    Jevgeni Kabanov

    October 7, 2013 @ 7:31 am


  17.


    October 7, 2013 @ 12:34 pm

    You should contact Honda then. Or maybe read an article published by Honda. Maybe they have heard about some robotics. Don’t go with a paper from Audi or Mercedes–they are not into robotics. You may end up with a tool they have “heard about before” which improves robots by 8% but has no use to you at all.

  18.

    Jevgeni Kabanov

    October 7, 2013 @ 12:57 pm

    You’re bringing out one paragraph from a 44-page document. And that one paragraph is based on survey data, the same as everything else. What’s your problem again?

  19.


    October 7, 2013 @ 1:55 pm

    “and JRebel, that time-saving tool we’ve heard about before” is based on survey data?

    How cute :)

    So it is OK to write a 1000-page report and at the end say that no other tool improves productivity as much as X, where you are related to X? “Based on survey data…” :)

    What was it that you were promoting again?

  20.

    Toomas Römer

    October 7, 2013 @ 2:00 pm

    The reason why we do productivity related research and articles is because we build productivity tools. We also have to mention our tools in there because they do save you a lot of time. If you are a Java developer then feel free to ping me and I’ll give you a personal demo if you don’t believe this.

  21.


    October 7, 2013 @ 2:12 pm

    “and JRebel, that time-saving tool we’ve heard about before”

    Does this sentence manipulate people? It makes me think, “This is a report from someone who ‘only’ heard about it before, but really does not know about it… And surprise, surprise… it seems it is the top product after all among the ones they are looking at!”

    Why don’t you just say, “and JRebel, which is powered by us…”? Or simply “and JRebel…”?

    If you believe you are being ethical here, then fine.

    You can clearly see that there are no judgments/opinions on JRebel in my comments, just “ethics”. So you are really replying to me about something I am not actually mentioning.

    You mentioning your tool there is not a problem; you “acting like it is not your tool” is not very nice. If you think you are not really acting like this, then it is probably my wrong judgment. English is not my mother tongue, after all.

    Great article..

  22.

    Toomas Römer

    October 7, 2013 @ 2:16 pm

    I think you are reading too much into it. I’ve seen a lot of materials at ZT over the years, and we don’t hide the fact that ZT, RebelLabs and JRebel are connected (all reports are a team effort from the engineering, marketing and design teams). But of course for every paragraph there is a choice of wording, and I think somebody here chose this one. I’m sure it wasn’t done to hide anything–sorry that it came across like that to you.

  23.


    October 7, 2013 @ 7:33 pm

    In your report, you indicated that you “publish our raw data for your own analysis”. Where can I find your raw data?

  24.

    Jevgeni Kabanov

    October 8, 2013 @ 1:18 pm

    We included a whole paragraph to explain the inclusion:

    “*Note: In order to maintain objectivity, we didn’t originally include JRebel in the list of tools. However, we couldn’t help ourselves and matched the emails provided by 47% of all respondents against our client list. We identified that one-third of respondents with email address used JRebel, two-thirds didn’t, and used their data for comparison. The results were too cool to omit :)”

    I don’t know what else we should have done.

  25.


    October 8, 2013 @ 1:22 pm

    You should have said:

    “We identified that one-third of respondents with email address used JRebel, a tool we heard before…”


  26.


    November 14, 2013 @ 11:11 pm

    Thanks for your post. Regarding software development productivity, I think you should check out a tool that tracks developers’ activity in real time in order to improve performance through data and facts. This is something like Quantified Self, but for software developers. What do you think?

  27.


    August 8, 2014 @ 9:43 am

    Insightful report, thanks. We use an online Kanban board to reduce delivery time and identify areas that can be improved to increase the quality of our work.

  28.

    K Lenc

    June 9, 2015 @ 2:06 pm

    Any opinions about ?
