Generally, a metric is a mapping that associates numerical values with program static or dynamic elements such as functions, variables, classes, objects, types, or threads. The numerical values may represent various resources used by the program.

For in-depth analysis of program performance, it is useful to examine a call graph. A call graph captures the call relationships among methods: its nodes represent the program's methods, and its directed arcs represent calls made from one method to another. For each arc, call counts and timing data are collected.
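As a minimal sketch of this data structure (the CallGraph class and its fields are illustrative, not part of HP jmeter), each arc can be keyed by its caller/callee pair and carry the call count and accumulated time:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal call-graph sketch: nodes are method names; each arc carries
// the call count and accumulated time for one caller -> callee pair.
public class CallGraph {
    // Per-arc data collected by the profiler.
    static final class Arc {
        long calls;
        long nanos;
    }

    // Arcs keyed by "caller -> callee".
    private final Map<String, Arc> arcs = new HashMap<>();

    // Record one completed call from caller to callee taking elapsedNanos.
    public void record(String caller, String callee, long elapsedNanos) {
        Arc arc = arcs.computeIfAbsent(caller + " -> " + callee, k -> new Arc());
        arc.calls++;               // call count for this arc
        arc.nanos += elapsedNanos; // accumulated timing data for this arc
    }

    public long callCount(String caller, String callee) {
        Arc a = arcs.get(caller + " -> " + callee);
        return a == null ? 0 : a.calls;
    }

    public long totalNanos(String caller, String callee) {
        Arc a = arcs.get(caller + " -> " + callee);
        return a == null ? 0 : a.nanos;
    }
}
```

Note that repeated calls on the same arc update a single record, so the graph stays small even for long runs.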

Tracing

Tracing is one of the two methods discussed here for collecting profile data. Java virtual machines use tracing with reduction, which works as follows: profile data is collected whenever the application makes a method call. The names of the calling method (the caller) and the called method (the "callee") are recorded, along with the time spent in the call. The data is accumulated (the "reduction"), so consecutive calls from the same caller to the same callee increase the recorded time value rather than creating new records. The number of calls is also recorded.

Tracing requires frequent reading of the current time (or measurement of other resources consumed by the program) and can therefore introduce substantial overhead. It produces accurate call counts and a complete call graph, but the timing data can be significantly distorted by that overhead.
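Tracing with reduction can be sketched as follows (the Tracer class below is hypothetical, not jmeter's implementation): instrumented code times each call and accumulates the result into one record per caller/callee pair.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of tracing with reduction: every traced call updates one
// accumulated record per caller/callee pair instead of logging each event.
public class Tracer {
    static final class Record {
        long calls;       // exact call count on this arc
        long totalNanos;  // accumulated time spent in the callee
    }

    private final Map<String, Record> records = new HashMap<>();

    // Run body, timing it and charging the cost to caller -> callee.
    public <T> T trace(String caller, String callee, Supplier<T> body) {
        long start = System.nanoTime(); // frequent clock reads: the source of tracing overhead
        try {
            return body.get();
        } finally {
            long elapsed = System.nanoTime() - start;
            Record r = records.computeIfAbsent(caller + " -> " + callee, k -> new Record());
            r.calls++;               // call counts are exact
            r.totalNanos += elapsed; // timing data, inflated by the overhead itself
        }
    }

    public long calls(String caller, String callee) {
        Record r = records.get(caller + " -> " + callee);
        return r == null ? 0 : r.calls;
    }
}
```

Each traced call reads the clock twice, which is why tracing overhead grows with call frequency and why the timing data is the less reliable part of the result.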

Sampling

In sampling, the program runs at its own pace, but from time to time the profiler temporarily interrupts the program's progress to determine which method is currently executing. The sampling interval is the elapsed time between two consecutive status checks. Sampling uses wall-clock time as the basis for the sampling interval but collects data only for CPU-scheduled threads, so methods that consume more CPU time are detected more frequently. With a large number of samples, the CPU time for each method is estimated quite well.

Sampling is complementary to tracing. It has relatively low overhead and produces fairly accurate timing data (at least for long-running applications), but it cannot produce call counts, and the call graph it yields is only partial: less significant arcs and nodes are usually missing.
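A sampler can be sketched with standard Java APIs (the Sampler class below is hypothetical, and unlike a real profiler it does not restrict itself to CPU-scheduled threads): a background thread periodically captures every live thread's stack and attributes one sample to the method on top.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a sampler: periodically capture stack traces and count which
// method is on top of each stack. Methods that consume more time are seen
// more often; call counts cannot be recovered from samples.
public class Sampler implements Runnable {
    private final Map<String, Long> hits = new HashMap<>();
    private final long intervalMillis; // the sampling interval
    private volatile boolean running = true;

    public Sampler(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    @Override
    public void run() {
        while (running) {
            // One sample: look at every live thread's currently executing method.
            for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                if (stack.length > 0) {
                    String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                    hits.merge(top, 1L, Long::sum); // attribute one sample to that method
                }
            }
            try {
                Thread.sleep(intervalMillis);       // wait out the sampling interval
            } catch (InterruptedException ie) {
                return;
            }
        }
    }

    public void stop() { running = false; }

    // Sample counts per method; read after the sampler thread has stopped.
    public Map<String, Long> hits() { return hits; }
}
```

The hit counts approximate each method's relative share of execution time, but nothing in the samples says how many times a method was called, which is why sampling cannot produce call counts.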

See also Data Sampling Considerations.

Tuning Performance

The application tuning process consists of three major steps:

1. Run the application and generate profile data.

2. Analyze the profile data and identify any performance bottlenecks.

3. Modify the application to eliminate the problem.

In most cases, you should check whether the performance problem has been eliminated by running the application again and comparing the new profile data with the previous data. In fact, the whole process should be iterated until reasonable performance expectations are met.

To be able to compare the profile data meaningfully, you need to run the application using the same input data or load (which is called a benchmark) and in the same environment. See also Preparing a Benchmark (page 60).

Remember the 80-20 rule: in most cases 80% of the application resources are used by only 20% of the program code. Tune those parts of the code that will have a large impact on performance.

Profiling Overview

HP jmeter Software for HP-UX manual: Tracing, Sampling, Tuning Performance