This chapter explores Form Services' scalability features. We researched the server's scalability by conducting a number of benchmark tests using popular hardware platforms and operating systems.
For each configuration we measured two benchmarks: RAM per user and users per CPU. We obtained the following results for Forms Server 6.0:
For Windows NT:
Application size/complexity | RAM per user (MB) | Users per CPU |
---|---|---|
medium/moderate | 2.5-6.0 | 100-300 |
small/simple | 1.0-2.5 | 150-300 |
For Sun Solaris:
Application size/complexity | RAM per user (MB) | Users per CPU |
---|---|---|
medium/moderate | 2.0-5.0 | 200-400 |
small/simple | 1.0-2.0 | 300-500 |
The results described in this chapter are specific to the 6i release of Form Services and should not be applied to previous releases of the product. The performance of this release has improved over earlier releases as a result of a number of architectural and code optimizations.
Benchmark testing is an ongoing process at Oracle Corporation. The figures presented here represent the information available at the time of writing. Additional results will be published as they become available.
Scalability is the ability to accommodate an increasing user population on a single system by adding more hardware resources to the system without changing the underlying software. A scalable system can accommodate the growing needs of an enterprise.
Choosing both hardware and software that can grow with your performance needs is a much better strategy than purchasing new software every time your performance needs change.
Consider questions such as how many concurrent users the system must support, what response times are acceptable, and how the user population is expected to grow. The answers to these questions depend heavily on the hardware, operating systems, and application software being used.
The scalability of a networked application is tied to the ability of the application server and the network topology to predictably accommodate an increase in user load.
It is useful to understand the role of each component described in this section and how each can affect the overall scalability of a system, especially in a Form Services environment.
This chapter uses as examples the two most commonly used server hardware and operating system combinations: Sun Solaris, running on Sun UltraSparc architecture, and Microsoft Windows NT, running on Intel architecture.
Several areas are important in the evaluation of a Form Services-based system: the processor, memory, and the physical network.
Work faster or work smarter? Processor technology has explored both paths. Typically, a company will release new generation architectures (working smarter) every two to three years. In between those releases, it will increase the processor speed (working faster). The speed of a processor, also called clock speed, is usually represented in megahertz (MHz). Processor speed is a good indication of how fast a computer system can run. Typically, computers used as servers employ more than one processor and are called multi-processor systems.
The metric of real interest for Form Services is the number of simultaneous users each processor can support, sometimes called Users per Processor. This number varies greatly for different types of processors. For an example of this variability, see Table 14-1 and Table 14-2.
From empirical data collected in the benchmark, a computer with a 400MHz Intel Pentium II Xeon processor with 1MB of L2 cache could support approximately twice as many users as a 200MHz Pentium Pro system.
Memory is the amount of RAM that a computer system has available to launch and run programs. The amount of RAM in computer systems is usually represented in megabytes (MB).
In the normal execution of a program, the program is loaded in RAM, and the operating system swaps the program to disk whenever the program is inactive. The operating system brings the program back into RAM when it becomes active.
This activity is generally called swapping. Most operating systems, such as Sun Solaris or Microsoft Windows NT, perform swapping during normal operation. Swapping places additional demands on the processor. Excessive swapping tends to slow down a system considerably. To prevent slowed performance, include enough RAM in the server host machine.
The important metric is the RAM required for each additional user who connects and runs an application through Form Services. This metric is also called Memory per User. Performance-measuring tools often do not provide an accurate measure of Memory per User, so study this metric carefully to determine memory requirements. For an example of Memory per User, see Table 14-1 and Table 14-2.
In a multi-tier, Internet-based architecture such as Form Services, the physical network that connects clients to Form Services and the connection between Form Services and the database are key factors in the overall scalability of the system. When you measure the performance of your Form Services-based system, pay careful attention to the performance of the physical network.
The performance of an individual process in a multi-user, multi-process environment depends directly on how much of that process can be served from main memory. That is, if required pages are swapped out to virtual memory to make room for other processes, performance suffers. One technique to increase the likelihood of finding a required page in main memory is to implement a shared memory model using Image Mapped Memory. Image Mapped Memory maps a file's contents to a specific address space that is shared across processes.
Form Services uses Image Mapped Memory. Individual Forms processes share a significant portion of the FMX file image, which reduces individual memory requirements and increases overall scalability.
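Form Services implements this mapping natively in its runtime. Purely as an illustration of the general mechanism, the following Java sketch maps a file image into the process address space read-only, so that multiple processes mapping the same file can be backed by the same physical pages; the file name is a hypothetical example and the code is not part of Form Services.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustration of image-mapped (memory-mapped) file access: the file's
// contents are mapped into the process address space rather than copied
// into private memory. When several processes map the same file read-only,
// the operating system can back them with the same physical pages, which is
// the effect Form Services relies on for shared FMX images.
public class MappedImageDemo {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "orders.fmx"; // hypothetical module name
        try (RandomAccessFile file = new RandomAccessFile(path, "r");
             FileChannel channel = file.getChannel()) {
            // Map the whole file read-only; no private copy of the image is made.
            MappedByteBuffer image = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            System.out.println("Mapped " + image.capacity() + " bytes of " + path);
        }
    }
}
```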
In a benchmark scenario, it is impractical to configure a number of client machines (and users) that accurately represents a live application environment. In benchmarks, load simulators are used to simulate real users that perform transactions on the application server. The Oracle Tools Development Organization has developed a load simulator that mimics real-world Form Services users by sending messages to the Server to simulate load. The load simulator is a small Java application that sits between the Form Services and the UI client, intercepting the message traffic that passes between these two components.
Once event messages from the client are recorded, it is possible to play them back to the server. This simulates an actual user session. (Note that the UI client is not involved in playback mode.) During playback to the server, the load simulator is capable of playing back many user sessions. In this manner, the load simulator is able to calculate the total response time for a user by determining the total round trip time for messages between client and server. By summing the Total Response Time throughout an entire business transaction, it is possible to get a measurable metric for application performance.
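Oracle's load simulator is internal tooling, so the sketch below is only a minimal illustration of the record-and-playback idea and of how Total Response Time can be accumulated from message round trips. The class, the message representation, and sendAndWait() are assumptions for illustration, not part of the actual simulator.

```java
import java.util.ArrayList;
import java.util.List;

// Highly simplified sketch of the record/playback idea behind a Forms load
// simulator. A real simulator intercepts the Forms message protocol between
// the UI client and the server; here the "messages" are opaque byte arrays
// and sendAndWait() stands in for a real network round trip.
public class LoadSimulatorSketch {

    private final List<byte[]> recordedMessages = new ArrayList<>();

    // Record phase: capture each client-to-server message as it passes through.
    public void record(byte[] message) {
        recordedMessages.add(message);
    }

    // Playback phase: replay the recorded session and sum the round-trip times.
    public long playbackTotalResponseTimeMillis() {
        long total = 0;
        for (byte[] message : recordedMessages) {
            long start = System.currentTimeMillis();
            sendAndWait(message);                 // hypothetical round trip to the server
            total += System.currentTimeMillis() - start;
        }
        return total;                             // Total Response Time for the business transaction
    }

    private void sendAndWait(byte[] message) {
        // Placeholder for the real send/receive over the connection to Form Services.
    }
}
```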
We performed tests against Forms applications of various complexity, from a simple single Form containing List of Values (LOVs) and pop-up windows, to complex applications containing multiple Forms and PL/SQL libraries (PLLs) open simultaneously. We tied application complexity to the number of modules that a user may be accessing at one time, rather than to the inherent complexity of any one module.
A good way to gauge complexity is to look at all the dependencies attached to a form. For example, a form may call other forms through the CALL_FORM or OPEN_FORM built-in, have attached menus (MMX files), and load external business logic through PL/SQL libraries (PLL files). All of these factors contribute to memory usage per user.
The following table classifies the level of complexity of Oracle Forms applications.
Application size/complexity | Total size of concurrent modules in memory |
---|---|
large/complex | > 10MB |
medium/moderate | 2MB - 10MB |
small/simple | < 2MB |
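As a small illustration of this classification, the sketch below totals the sizes of the modules a user might have open concurrently (FMX, MMX, and PLL files) and maps the total to a complexity class; the module sizes shown are hypothetical.

```java
// Illustrative classification using the size thresholds from the table above.
// The module list and sizes are hypothetical; in practice you would total the
// FMX, MMX, and PLL files that one user can have open concurrently.
public class ComplexityClassifier {

    static String classify(long totalBytes) {
        long mb = totalBytes / (1024 * 1024);
        if (mb > 10) return "large/complex";
        if (mb >= 2) return "medium/moderate";
        return "small/simple";
    }

    public static void main(String[] args) {
        // Hypothetical concurrent modules: main form, called form, menu, PL/SQL library.
        long[] moduleSizes = { 3500000L, 1200000L, 400000L, 900000L };
        long total = 0;
        for (long size : moduleSizes) {
            total += size;
        }
        System.out.println("Total " + total + " bytes => " + classify(total));
    }
}
```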
We tested two applications of different complexity: one small/simple and one medium/moderate.
To represent a realistic user community, that is, one with a mixed workload, the test encompassed a number of transactions that mimicked, step by step, the tasks performed by a Service Desk Clerk in a 45-minute scenario.
To get a feel for how performance decreases as user load increases, it is first necessary to determine the time taken by a given user to perform a given application task. This Total Response Time metric differs from merely testing the response time of a single physical transaction or network round trip: it looks at the total time taken by an average user to perform the business task at hand, that is, the sum of all interactions with Form Services and the database that take place as part of the business transaction.
To gain some empirical information about overall system resources, the scalability testing also uses the native operating system monitoring utilities (such as Windows NT performance monitor) to determine values for both physical and virtual memory usage, and for total CPU utilization.
By using the Total Response Time metric with the empirical measurements, it was possible to determine the point at which, given an increasing user load, performance for a given user significantly degraded. Having determined the number of users that can be supported with acceptable performance, individual memory consumption becomes a simple equation of the total memory available divided by the number of users accessing the application.
For example:
On a given hardware platform with 512MB of RAM, performance is constant for up to 60 concurrent users. Then it degrades significantly. From this, we can specify that the maximum number of users supported is 60.
Allowing for a nominal operating system overhead (~32MB), individual memory usage would be (512-32) / 60 or 8MB per user.
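For reference, the arithmetic in this example can be written as a tiny helper; the inputs (512MB of RAM, roughly 32MB of operating system overhead, 60 users at the degradation point) are the values used in the example above.

```java
// The memory-per-user arithmetic from the example above, expressed as a small
// helper: memory per user = (total RAM - OS overhead) / supported users.
public class MemoryPerUserExample {

    static double memoryPerUserMb(int totalRamMb, int osOverheadMb, int maxUsers) {
        return (double) (totalRamMb - osOverheadMb) / maxUsers;
    }

    public static void main(String[] args) {
        System.out.println(memoryPerUserMb(512, 32, 60) + " MB per user"); // prints 8.0 MB per user
    }
}
```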
The following sections define the systems we tested, the results of the tests, and a brief analysis for each scenario.
Parameters:
Application size/complexity | CPU | RAM | Operating System | Swap |
---|---|---|---|---|
medium (between 2MB and 10MB) | 2 x 200 MHz Pentium Pro | 512MB | Windows NT 4.0 Server (SP 3) | 2GB |
Results:
Users per CPU | Memory per user |
---|---|
100 | 2.4MB |
Analysis:
This system was one of the least expensive systems we used to test the scalability of a medium-complexity application. It handled about 200 users very efficiently; performance degraded dramatically beyond 200 users. This system is cost-effective as a small departmental server for up to 200 users running applications in the medium-complexity class.
Parameters:
Application size/complexity | CPU | RAM | Operating System | Swap |
---|---|---|---|---|
medium (between 2MB and 10MB) | 2 x 400 MHz Pentium II Xeon with 1MB L2 cache | 512MB | Windows NT 4.0 Server (SP 3) | 2GB |
Results:
Users per CPU | Memory per user |
---|---|
200 | 1.2MB |
Analysis:
This system was one of the newest Intel Pentium II Xeon-based servers we used to test the scalability of a medium-complexity application. It handled about 400 users very efficiently; performance degraded dramatically beyond 400 users. The system is cost-effective as a large departmental server or as an entry-level enterprise server for small to medium businesses.
Parameters:
Application size/complexity | CPU | RAM | Operating System | Swap |
---|---|---|---|---|
medium (between 2MB and 10MB) | 2 x 248 MHz UltraSparc | 512MB | Solaris 2.5.1 | 2GB |
Results:
Users per CPU | Memory per user |
---|---|
200 | 1.3MB |
Analysis:
The system handled about 375 users very efficiently; performance degraded dramatically beyond that point. The slowdown appeared to be caused by excessive paging and swapping activity, which indicates that the real bottleneck was physical memory. This system is cost-effective for larger departments or small to medium businesses running medium-complexity applications.
Parameters:
Application size/complexity | CPU | RAM | Operating System | Swap |
---|---|---|---|---|
small (less than 2MB) | 2 x 400 MHz Pentium II Xeon with 1MB L2 cache | 512MB | Windows NT Server 4.0 (SP 3) | 2GB |
Results:
Users per CPU | Memory per user |
---|---|
250 | 1MB |
Analysis:
The Pentium II Xeon-based server handled about 500 users very efficiently with a small application.
Parameters:
Application size/complexity | CPU | RAM | Operating System | Swap |
---|---|---|---|---|
small (less than 2MB) | 2 x 248 MHz UltraSparc | 512MB | Solaris 2.5.1 | 2GB |
Results:
Users per CPU | Memory per user |
---|---|
240 | 1MB |
Analysis:
This system is an entry-level Sun UltraSparc system. It handled about 480 users very efficiently; performance degraded dramatically beyond 480 users.
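To show how results like these might be applied, the sketch below combines a Users per CPU figure with a Memory per User figure to estimate how many users a given server could support. It is a rough planning aid under assumed inputs, not an additional benchmark result; with figures similar to the Pentium II Xeon scenario (2 CPUs, 200 users per CPU, 512MB of RAM, about 32MB of operating system overhead, 1.2MB per user), the estimate comes out to roughly 400 users, in line with the analysis above.

```java
// Rough capacity estimate built from benchmark-style figures: supported users
// are limited by both CPU (users per CPU x number of CPUs) and memory
// ((total RAM - OS overhead) / memory per user). The inputs in main() are
// illustrative assumptions, not additional benchmark results.
public class CapacityEstimate {

    static int supportedUsers(int cpus, int usersPerCpu,
                              int totalRamMb, int osOverheadMb, double memPerUserMb) {
        int cpuBound = cpus * usersPerCpu;
        int memoryBound = (int) ((totalRamMb - osOverheadMb) / memPerUserMb);
        return Math.min(cpuBound, memoryBound);
    }

    public static void main(String[] args) {
        // Hypothetical 2-CPU server with 512MB of RAM running a medium-complexity application.
        int users = supportedUsers(2, 200, 512, 32, 1.2);
        System.out.println("Estimated supported users: " + users); // prints about 400
    }
}
```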
Copyright © 2000 Oracle Corporation. All Rights Reserved.