In this section we’ll look at how the complexity or size of an application changes how performance testing is done. We’ll split applications into three categories, which we’ll refer to from here on as simple applications (fewer than 10 screens), medium complexity applications (10 to 99 screens) and complex applications (100 or more screens). When it comes to tooling, we notice a general trend: as applications become more complex, more tools tend to be used. A couple of exceptions are the NetBeans profiler and Java Mission Control; usage of Java Mission Control actually decreases as applications become larger and more complex.
Complexity vs. Root Causes
How about the root causes of problems? Do they change as application size/complexity increases? Largely no, but there are a few exceptions, shown in Figure 2.8. We can see that HTTP session and database query problems are affected by the size of an application. As an application gets larger or more complex, HTTP session problems grow too. This one is pretty self-explanatory, as there will potentially be more places to get and store data from. Database access problems also increase with application complexity, which might simply mean that larger applications are more likely to use a database at all. However, a database is more or less a fixture of most web applications these days. More plausibly, there is a greater chance of a database growing to a size where it requires further indexing, or of a poor schema design forcing multiple queries to be run for the same data. Alternatively, there might be a higher chance of an n+1 style problem being compounded by a single request, which ultimately bubbles up to the user as a noticeable slowdown.
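To make the n+1 pattern concrete, here’s a minimal sketch using a hypothetical in-memory repository (the class and method names are invented for illustration, and each method call stands in for one round trip to a database). Fetching a list and then querying per item costs 1 + N round trips, while a single joined/batched query costs one:

```java
import java.util.*;

// Hypothetical "repository" for illustration; every method call below
// represents one round trip to the database.
class OrderRepository {
    int queryCount = 0;
    private final Map<Integer, List<String>> itemsByOrder = Map.of(
            1, List.of("book"), 2, List.of("pen", "ink"), 3, List.of("lamp"));

    List<Integer> findAllOrderIds() {            // the "1" query
        queryCount++;
        return new ArrayList<>(itemsByOrder.keySet());
    }

    List<String> findItems(int orderId) {        // one query per order: the "+N"
        queryCount++;
        return itemsByOrder.get(orderId);
    }

    Map<Integer, List<String>> findAllItemsBatched() { // one joined query instead
        queryCount++;
        return itemsByOrder;
    }
}

public class NPlusOneDemo {
    public static void main(String[] args) {
        // Naive loop: 1 query for the ids, then 1 per order (1 + 3 = 4).
        OrderRepository naive = new OrderRepository();
        for (int id : naive.findAllOrderIds()) {
            naive.findItems(id);
        }

        // Batched: everything in a single round trip.
        OrderRepository batched = new OrderRepository();
        batched.findAllItemsBatched();

        System.out.println(naive.queryCount + " vs " + batched.queryCount);
    }
}
```

With three orders the naive version already issues four queries to the batched version’s one; at production data sizes that multiplier is exactly the kind of problem that surfaces to the user as a slow request.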
We’ve talked about the complexity of an application using the number of screens as a metric, but what about the application type? There were four types we asked about: batch, web, mobile and desktop – but does the type affect the tools that are used? Across the selection there were two tools in particular whose usage differed across application types, namely JProfiler and YourKit, as shown in Figure 2.9. JProfiler was very popular with web, mobile and desktop applications, but seemingly not a favorable choice for batch applications. YourKit, however, has a really strong presence in batch applications. In fact, those who stated batch as their application type were more than three times as likely to use YourKit as those with an application type of mobile. Batch teams are also less likely to test without tools – three times less likely than teams with a web application, in fact! So, batch application folks, take a bow, you’re our performance testing heroes!
When are different applications tested?
Batch applications are tested in production approximately 30% less than other applications. Instead, batch performance testing is more likely to occur from the CI stage all the way through to staging – more so than for any other application type. Once again, batch performance testers showed their awesomeness: they said they don’t test at all only half as often as the other application types. Hey mom, when I grow up, I wanna be a batch performance tester! Desktop applications are the most likely to be tested while they’re being coded – by over 25% – so a lot can be said for testing early on the desktop.
What gets measured?
Teams building mobile applications measure network latency more than any of the other application types – over three times as much as batch teams and twice as much as desktop teams. Batch testers care more about thread concurrency and contention than others, by around 30%. They also care a lot about application code performance: batch applications are 20% more likely to have their code performance monitored than web or desktop applications, and 50% more likely than mobile applications. So, batch applications, you totally take this round! Let’s move on to the next part to find out what our respondents consider to be best practices in the world of performance testing.