Performance Terms
1-2 Oracle HTTP Server powered by Apache Performance Guide
Following are performance terms used in this book:

concurrency: The ability to handle multiple requests simultaneously. Threads and processes are examples of concurrency mechanisms.

latency: The time that one system component spends waiting for another component in order to complete the entire task. Latency can be defined as wasted time. In networking discussions, latency is defined as the travel time of a packet from source to destination.

response time: The time between the submission of a request and the completion of the response.

scalability: The ability of a system to provide throughput in proportion to, and limited only by, available hardware resources. A scalable system is one that can handle an increasing number of requests without adversely affecting response time and throughput.

service time: The time between the initiation and completion of the response to a request.

think time: The time during which the user is not engaged in actual use of the processor.

throughput: The number of requests processed per unit of time.

wait time: The time between the submission of the request and the initiation of the response.

What is Performance Tuning?

Performance must be built in. You must anticipate performance requirements during application analysis and design, and balance the costs and benefits of optimal performance (see "Setting Performance Targets" on page 1-7). This section introduces some fundamental concepts:

Response Time
System Throughput
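The timing terms defined above fit together: because wait time runs from submission to initiation of the response, and service time from initiation to completion, a request's response time is its wait time plus its service time; throughput is simply requests completed per unit of time. The following sketch is illustrative only (the function names are not part of this guide):

```python
def response_time(wait_time, service_time):
    """Response time = wait time (submission to initiation)
    plus service time (initiation to completion)."""
    return wait_time + service_time

def throughput(num_requests, elapsed_seconds):
    """Throughput is the number of requests processed per unit of time."""
    return num_requests / elapsed_seconds

# Example: a request waits 0.2 seconds in a queue, then takes
# 0.3 seconds to service, so its response time is 0.5 seconds.
rt = response_time(0.2, 0.3)

# Example: 100 requests completed in 50 seconds is a throughput
# of 2 requests per second.
tp = throughput(100, 50.0)
```

Note that latency (time spent waiting on another component) contributes to wait time but not to useful work, which is why the guide characterizes it as wasted time.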