
Developer Productivity Report 2013 – How Engineering Tools & Practices Impact Software Quality & Delivery

So who cares about quality software and predictable deliveries anyway?

In truth, we all should. You might be “Developer of the Year”, but if the team around you fails to deliver quality software on time, then it pays to review your team’s practices and tools to see if they are somehow related. So that’s what we did–and guess what: the things you do and the tools you use DO have an effect on your software quality and your ability to deliver predictably. And we’re going to find out how.

How do the practices, tools and decisions of development teams have an effect on the quality and predictability of software releases? Seems a bit abstract, right?

In fact, this is one of the most frustrating aspects of talking about the quality and predictability of software releases. We all hear a lot of noise about best practices in software engineering, but a lot of it is based on anecdotal evidence and abstractions. Do we ever see much data related to the impact of so-called best practices on teams?

With our survey, the goal was to collect data to prove or disprove the effectiveness of these best practices–looking at methodologies, tools, and company size & industry within the context of these practices.

Download the pdf

Our data and metrics

In the end, we collected 1006 responses, which is reasonable for a survey where all questions are required–last year, over 1800 developers completed at least half of our survey on tools and technologies.

Note: It seems that getting good responses to surveys isn’t easy–most people find a 2-3 question survey palatable, but beyond raw headline numbers, it’s hard to learn much from someone in just a few seconds. We narrowed our scope down to 20 questions on a one-page form, which took our own development team about 5 minutes to complete. Still, we didn’t see a flood of respondent participation.

So what metrics did we decide to track in order to understand how best practices actually work?

  1. Quality of software – determined by the frequency of critical or blocker bugs discovered after release.
  2. Predictability of delivery – determined by delays in releases, execution of planned requirements, and in-process changes (aka “scope creep”).

After ascertaining that Quality and Predictability were two areas in which data could be gathered, we continued with further analysis based on tools used (e.g. VCS or CI servers), practices employed (e.g. testing, measuring, reviewing) and industry & company size.
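To make the two metrics concrete, here is a minimal sketch of how ordinal survey answers could be turned into comparable scores. The field names, answer scales, and scoring weights are our own illustrative assumptions, not the report’s actual methodology:

```python
# Hypothetical illustration only: the answer scales and weights below are
# assumptions, not the survey's real scoring scheme.
from statistics import mean

# "How often do critical/blocker bugs appear after release?" (higher = better)
BUG_FREQUENCY_SCORE = {"never": 4, "rarely": 3, "sometimes": 2, "often": 1}

# "How often do releases slip past their planned date?" (higher = better)
DELAY_SCORE = {"never": 4, "rarely": 3, "sometimes": 2, "often": 1}

def quality_score(responses):
    """Average quality across respondents: fewer post-release bugs scores higher."""
    return mean(BUG_FREQUENCY_SCORE[r["bug_frequency"]] for r in responses)

def predictability_score(responses):
    """Average predictability across respondents: fewer delays scores higher."""
    return mean(DELAY_SCORE[r["release_delays"]] for r in responses)

responses = [
    {"bug_frequency": "rarely", "release_delays": "sometimes"},
    {"bug_frequency": "sometimes", "release_delays": "often"},
]
print(quality_score(responses))         # 2.5
print(predictability_score(responses))  # 1.5
```

Averaging an ordinal scale like this is a simplification, but it lets teams that use different tools or practices be compared along the same two axes.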

Quick note about bias: When analyzing the data, we discovered a couple of areas where bias was present. Compared to the software industry as a whole, our respondents skew disproportionately towards Software/Technology companies and towards Java as a programming language.

A little history: ZeroTurnaround’s Java and Developer Productivity Reports from 2009 – Present

If you’ve been following ZeroTurnaround and RebelLabs for a while, you’ll know that this is our fourth report in as many years.

It started back in 2009, when we began our quest to understand developer productivity by looking at which Java EE application servers/containers 1100+ developers were using and how much time drain from redeploys was associated with each one (we discovered that between 3-7 work weeks each year were lost to this process).

In 2011 we expanded our research efforts and this time asked approximately 1000 developers about Build Tools, IDEs, Frameworks and Java EE standards in addition to App Servers, and again asked about how much of each hour was lost to restarts. We also asked ~1250 developers in India about tools and productivity, and saw some interesting differences between India and the rest of the world.

By 2012, we wanted to go even further. Our Developer Productivity Report 2012 focused on the vast array of tools & technologies that developers use each day, and looked deeper into what makes developers tick, asking about developers’ work week, stress and efficiency. Releasing this report was, in many ways, the unofficial birth of RebelLabs and the idea that high-quality, educational technical content is something we should continue to focus on.

So where does that leave us for 2013 and beyond? Issuing another report on the popularity of IDEs, Build Tools, CI servers, Web Frameworks and Application Servers was one idea–people loved our 2012 report. But would learning that Vaadin jumped 1% in popularity from 2012 to 2013, or confirming that Eclipse, Subversion, Jenkins, Spring MVC and Tomcat are still #1 in their respective fields truly be of value to the Java community as a whole?

Instead, we looked to cover the more difficult areas, examining how tools and practices affect organizations as a whole–namely the Quality and Predictability of software releases. It’s our goal to be the premier provider of Java development productivity statistics outside of dedicated research agencies, and we’re completely transparent and honest about our data. We admit bias. We publish our raw data for your own analysis. So we set out some goals for how we would proceed.

Moving forward, let’s go to Part I, where we discuss why it’s hard to measure quality and predictability, and what we did to quantify these metrics.
