There are two important rules to remember when modifying programs to improve performance. These might seem obvious, but in practice they are often forgotten.

Don't put performance above correctness.

When you modify the code, and especially when you change algorithms, always take care to preserve program correctness. After you change the code you will naturally want to test its performance; do not forget to test its correctness as well. You don't have to perform thorough testing after every minor change, but you certainly should once you're done with the tuning.
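
One inexpensive way to guard correctness while tuning is to keep the original implementation around as a reference and compare results on generated test inputs. The following is only a minimal sketch: the sortOriginal and sortTuned methods are hypothetical stand-ins for whatever routine you are actually optimizing.

    import java.util.Arrays;
    import java.util.Random;

    public class CorrectnessCheck {

        // Hypothetical stand-in: the original routine, kept as the trusted reference.
        static int[] sortOriginal(int[] data) {
            int[] copy = data.clone();
            Arrays.sort(copy);
            return copy;
        }

        // Hypothetical stand-in: replace the body with the optimized version under test.
        static int[] sortTuned(int[] data) {
            int[] copy = data.clone();
            Arrays.sort(copy);
            return copy;
        }

        public static void main(String[] args) {
            Random random = new Random(42);   // fixed seed, so any failure is reproducible
            for (int run = 0; run < 1000; run++) {
                int[] input = new int[1000];
                for (int i = 0; i < input.length; i++) {
                    input[i] = random.nextInt(10000);
                }
                if (!Arrays.equals(sortOriginal(input), sortTuned(input))) {
                    throw new AssertionError("Tuned version diverged from reference on run " + run);
                }
            }
            System.out.println("Tuned version matches the reference on all test inputs.");
        }
    }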

Measure your progress.

Try to keep track of the performance improvements you make. Some code changes, although small, can yield large improvements. On the other hand, extensive changes that seemed very promising may yield only minor gains, or gains that are offset by a degradation in another part of the application. Remember that good performance is only one of the factors that determine software quality. Some changes (for example, inlining) may not be worth it if they compromise code readability or flexibility.
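
A simple way to keep track of progress is to record the elapsed time of each benchmark run together with a note describing the change that was applied. The sketch below assumes a hypothetical runBenchmark() method standing in for the workload you are measuring; the dummy loop inside it is only there to make the example runnable.

    public class ProgressLog {

        // Hypothetical placeholder for the workload being measured.
        static void runBenchmark() {
            long sum = 0;
            for (int i = 0; i < 50000000; i++) {
                sum += i;
            }
            if (sum == 42) {
                System.out.println(sum);   // use the result so the loop is not optimized away
            }
        }

        public static void main(String[] args) {
            String changeDescription = args.length > 0 ? args[0] : "baseline";
            long start = System.nanoTime();
            runBenchmark();
            long elapsedMillis = (System.nanoTime() - start) / 1000000;
            // Append this line to a log kept alongside the source,
            // e.g. "inlined hot getter: 1240 ms".
            System.out.println(changeDescription + ": " + elapsedMillis + " ms");
        }
    }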

Preparing a Benchmark

To test the performance of your application, you need to run it on some input data. Preparing good data for performance testing can be a difficult task; for many server-type applications it may involve setting up several machines and creating an “artificial” load. Constructing the benchmark is probably the single most time-consuming step in application tuning, but when done correctly it is well worth the effort.

Generally, you want the benchmark to resemble as much as possible the real load that the application will face when finally deployed.
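
For instance, if production traffic is known to be dominated by reads with only an occasional update, the load generator can reproduce that mix instead of hammering a single kind of operation. The proportions and the readRecord()/updateRecord() methods below are purely hypothetical placeholders for your own application's operations.

    import java.util.Random;

    public class LoadMix {

        // Hypothetical stand-ins for the operations the real application performs.
        static void readRecord(int id)   { /* look up a record */ }
        static void updateRecord(int id) { /* modify a record */ }

        public static void main(String[] args) {
            Random random = new Random(7);   // fixed seed keeps the generated mix repeatable
            for (int i = 0; i < 100000; i++) {
                int id = random.nextInt(500000);   // spread requests over many records
                // Assumed mix: roughly 90% reads, 10% updates, mirroring observed traffic.
                if (random.nextInt(100) < 90) {
                    readRecord(id);
                } else {
                    updateRecord(id);
                }
            }
        }
    }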

There are several mistakes that are often made at this stage. To avoid the most common ones, remember the following:

Make the benchmark repeatable; this is the key to a systematic approach to performance tuning (the sketch after this list shows one way to achieve it).

Do not create unrealistic situations that could speed up or slow down your application considerably. For example:

Do not direct all the traffic to just a few records in the database, which could cause exceptionally good performance because of caching, or exceptionally bad performance because of lock contention.

Do not run all clients from one machine. This might not exercise your application properly, because the client machine itself becomes the performance bottleneck.

Do not use too much invalid data; it may never get to the components of the application that do the real work.

Do not use a different environment (JVM, options, and so on) for performance testing than the one that will be used in the real deployment.

Size your benchmark appropriately:

If the benchmark runs only briefly, the profile data will contain too much “startup noise” from class loading, dynamic library loading, bytecode compilation and optimization, and application initialization (database connections, threads, and data structures). The warmup phase in the sketch after this list is one way to keep this noise out of the measurements.

If the benchmark runs for too long, it will be very time consuming to repeat the run, and you'll be tempted to make several changes to your application between consecutive profiling runs, which is not recommended.

As your application's performance improves, you may need to scale up your benchmark as well.
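
The sketch below acts on two of the points above: a fixed random seed keeps the run repeatable, and a warmup phase lets class loading and bytecode compilation settle before measurement begins. The BenchmarkHarness class and its processRequest() workload are hypothetical placeholders for your own application code.

    import java.util.Random;

    public class BenchmarkHarness {

        // Hypothetical placeholder for the operation being benchmarked,
        // e.g. a request handler or a database lookup in your application.
        static long processRequest(int key) {
            long hash = key;
            for (int i = 0; i < 1000; i++) {
                hash = hash * 31 + i;
            }
            return hash;
        }

        public static void main(String[] args) {
            // Fixed seed: every run sees the same request sequence,
            // so the results of different profiling runs are comparable.
            Random random = new Random(12345);

            // Warmup phase: let class loading, bytecode compilation, and other
            // one-time initialization settle before measuring.
            for (int i = 0; i < 10000; i++) {
                processRequest(random.nextInt(1000000));
            }

            // Measured phase. Keys are drawn from a wide range rather than
            // from a handful of "hot" records.
            long checksum = 0;
            long start = System.nanoTime();
            for (int i = 0; i < 100000; i++) {
                checksum += processRequest(random.nextInt(1000000));
            }
            long elapsedMillis = (System.nanoTime() - start) / 1000000;

            System.out.println("Measured phase: " + elapsedMillis + " ms (checksum " + checksum + ")");
        }
    }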
