This chapter details suggestions for improving performance of your applications. It includes the following sections:
The following table summarizes the available performance suggestions, and indicates where a detailed explanation can be found in this chapter.
Before getting into the details, you should read the introductory information in Section 3.2. The material on performance measurement in Section 3.3 may also be helpful.
The suggestions are grouped according to their scope, in this sequence:
Before setting out to improve performance, it's helpful to have a clear view of specific goals, and what is involved in achieving them.
You need to be precise about what areas you want to improve, and how performance in those areas is perceived or measured.
Additionally, improving performance can involve understanding the many interrelationships and dependencies in today's computing environment, the costs involved, and the trade-offs that may occur in improving performance in one area.
The use of Forms Developer and Reports Developer is divided into design time and runtime. Design time, when programmers are building the applications, is not usually a concern in terms of performance. It is runtime -- when the applications are being exercised by multiple end users in the daily business environment -- that is almost always the main concern. As a result, the rest of this chapter focuses on performance in the runtime environment.
There are many ways to view an application's performance. Its storage requirements, its coding efficiency, its network loading, and its server usage, are just a few areas. Every situation is different, and every site and department will have its own set of priorities and its own view of which performance area is most important. In addition, "good" and "bad" performance in these areas are relative things. There are rarely any absolute standards.
Often the most visible area is performance in terms of response time for end users (that is, how long people using the application must wait after making a choice or entry). Here, too, there are no absolute standards. No matter what the actual response time, users will have an opinion -- which will depend in part on what they are accustomed to and what their expectations are. Real numbers are irrelevant. If end users are not happy with the response time, then that area is certainly a candidate for improvement.
Applications do not run in a vacuum. In a client/server environment, the application is dependent on two underlying hardware and operating system configurations, plus a hardware and software network connection. In a three-tier environment, the situation is even more complex. In addition, the application is interacting with one or more database servers, and may also be calling other software components as it runs.
Further, an application is often sharing these hardware and software resources with other applications. Because of this sharing, an application that is efficient in itself can be adversely affected by other inefficient applications.
Performance of an application, then, is not just a result of its own design and usages, but a very complex result of the combined interactions of a great number of different components and factors.
Some improvements in performance are straightforward and purely beneficial. Eliminating useless code in an application would be an example.
Other improvements might not be so clear cut, however. For example, giving one application a higher network priority by necessity lowers the relative priority of the others. As another example, we might restructure a database to improve access time for one type of application, but find that we have actually degraded access time for other important applications.
At the single application level, a classic trade-off is space versus speed. We might be able to decrease our main storage requirements by off-loading some components, but that would most likely degrade the application's response time (since those components would need to be loaded when needed). On the other hand, we might move the loading operations into the start-up phase, which would improve later response time, but at the cost of higher initial start-up overhead.
Before deciding on any particular improvement effort, it's helpful to understand the broader implications, and make choices according to your priorities.
How do we tell if our applications are performing adequately?
In the case of response time, the opinion of our end users is paramount. But in other areas we would like more tangible data: some hard numbers.
The Ora_Prof built-in package is distributed with both Forms Developer and Reports Developer. It allows you to examine the PL/SQL in an application, and find out how much time a specific piece of code takes to run.
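For example, a minimal sketch using the Ora_Prof timer built-ins (the timer name and the code being measured are illustrative):

DECLARE
  elapsed NUMBER;
BEGIN
  Ora_Prof.Create_Timer('loop1');
  Ora_Prof.Start_Timer('loop1');
  -- ... the PL/SQL code being measured goes here ...
  Ora_Prof.Stop_Timer('loop1');
  elapsed := Ora_Prof.Elapsed_Time('loop1');  -- elapsed time in milliseconds
  Text_IO.Put_Line('loop1 took ' || elapsed || ' ms');
  Ora_Prof.Destroy_Timer('loop1');
END;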
The following product-specific measurement tools are also available.
You can obtain general information about a Forms application by setting the runtime option STATISTICS=YES.
You can use Form Builder's Performance Event Collection Services (PECS) to gather more detailed information about an application's runtime behavior.
You activate the PECS data collection by specifying the runtime option PECS=ON.
The simplest use of PECS is to collect application-wide statistics. This does not require any changes to the existing application (only the activation of PECS via the runtime option).
PECS also allows you to focus on specific areas in your application. PECS provides you with a number of built-ins that you can insert into your code to identify sections or events or classes that you want to examine in detail.
Once the data has been collected, you can use the PECS Assistant to view and analyze it. The Assistant produces various types of reports that let you see such things as elapsed time, CPU time, events or occurrences reached, usage of your PL/SQL code, and so forth. Your analysis of the application's runtime behavior can help you spot potential areas for improvement. For example, some section of code might be taking considerably longer to execute than the others, and would therefore be a candidate for closer investigation.
Report Builder offers two measurement tools: the Reports profile option and the Reports trace option.
The Reports profile option, when set, produces a log file that shows where the report spent its processing time. This may help you identify performance bottlenecks.
To set the profile option, specify PROFILE=<filename>, where <filename> is the name of the required log file. Profile can be either a report parameter or a command line argument.
Typical profile output from a sample report is shown below:
Total Elapsed Time:  29.00 seconds
Reports Time:        24.00 seconds (82.75% of TOTAL)
Oracle Time:          5.00 seconds (17.24% of TOTAL)
  UPI:                1.00 seconds
  SQL:                4.00 seconds
From this profile, it is possible to see the execution time (total elapsed time) for the report, the amount of time that was spent formatting the retrieved data (Reports Time), and the amount of time spent waiting for the data to be retrieved (Oracle Time). UPI time is the time spent establishing the database connection, and parsing and executing the SQL. The SQL time is the time spent while the database server fetches the data, and time spent executing SRW.DO_SQL() statements (the DML and DDL statements that your application may contain).
In this example, the profile shows that the majority of the time was spent laying out the data rather than querying and fetching.
The Reports trace option produces a file that describes the series of steps that a report carries out during the execution of the report. The trace option can be set so that all events are logged in the file, or only a subset of steps are logged (for example, only SQL execution steps). The trace file provides an abundance of information that is not only useful for performance tuning, but also in finding out what executed when.
The trace option can be set either from the main menu (choosing Trace under Tools) or from the command line arguments TRACEFILE (filename for trace information), TRACEMODE (either append trace information from future runs to the existing trace file, or replace the trace file), or TRACEOPTS (a list of the event types where tracing is required).
Database servers and network systems often provide measurement and analysis tools that you can use to obtain performance information in those areas.
For example, an invaluable aid to tuning your SQL is the SQL trace functionality provided by the Oracle database server. SQL trace enables you to see the SQL sent to the database, as well as the time taken to parse, execute, and fetch data from the statement. Once a trace file has been generated, use the TKPROF utility to generate an Explain Plan, which is a map of the execution plan used by the Oracle Optimizer. The Explain Plan shows, for example, where full-table scans have been used, which may suggest that the application could benefit from an index (depending on the performance hit). More information about the Explain Plan is available in the Oracle SQL Language Reference Manual.
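As a brief sketch (the trace file name and connect string are illustrative, and TIMED_STATISTICS must be enabled on the server for timing data), you might turn on SQL trace for a session and then format the resulting trace file with TKPROF:

-- enable SQL trace for the current session, then exercise the application
ALTER SESSION SET SQL_TRACE = TRUE;

-- afterward, format the server trace file from the operating system prompt:
--   tkprof ora_01234.trc tuning.prf explain=scott/tiger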
As well as measurement and analysis tools offered by the servers and network systems your application uses, you should also consult with the administrators of those areas. They may be able to offer direct assistance, or suggestions for ways to improve application performance in the existing environment.
The following performance-improvement guidelines apply to Forms Developer and Reports Developer in general (all their component Builders), and to both deployment architectures (client/server and three-tier).
The general guidelines cover these areas:
Perhaps the simplest way to obtain improved performance is to upgrade your hardware and/or software. While there is effort involved in upgrading, the performance improvements offered by the newer components often make it worthwhile.
Each successive release of Oracle software offers improvements and extensions over its predecessors. Improvements are in many categories, and vary in nature from release to release. But often a new release will offer not only new functionality, but also something in the way of performance enhancements -- perhaps additional tuning aids or even automatic performance improvement.
For example, Release 1.6 improves on Release 1.5 by providing a load balancer that can make efficient use of multiple application servers. Release 2 offers the returned-table-of-records feature, which allows passing changes once to a stored procedure that in turn distributes them to multiple tables, saving network trips. As yet another example, Release 6 contains re-written internal code that uses the more efficient OCI language to interface with the database server, providing improvements without any required customer action.
Consider upgrading to a later, more efficient release.
Both Forms Developer and Reports Developer, of course, run with other software, most notably the Oracle database server and the PL/SQL language components. Better performance in associated components will often be reflected in better performance in your applications. More tuning features in those areas may offer more opportunity for making improvements in the applications.
The later releases of the associated software almost always offer better performance. For example, the Oracle8 database server offers a number of performance improvements over Oracle7: for example, table and index partitioning, enhanced parallel processing, and deferred constraint checking.
Therefore, to the extent you are able to control or influence the choice of database server and other associated software, consider upgrading to a higher, more efficient level.
Increasing the capacities and/or speeds of the underlying hardware systems is an obvious approach to improving performance. This includes not only the desktop and server machines, but also the network connections between them.
Accessing a database is a major activity of typical Forms Developer and Reports Developer applications. Being efficient in reading and writing that data can have a significant effect on overall performance.
Both Forms Developer and Reports Developer are able to take advantage of the Oracle database server's array processing capabilities. This allows records to be fetched from the database in batches instead of one at a time, and results in significantly fewer calls to the database. The downside of array processing is that more space is required on the execution platform for storing the arrays of records returned.
If load on the network becomes a major bottleneck in the production environment, then set the Developer product's runtime ARRAYSIZE parameter to as large a value as possible for the execution environment.
Ideally, an application should have no redundant queries (queries which return data which is not required), since they will clearly diminish performance. However, situations can arise where an application not only needs to produce a different format for different users, but also needs to utilize different query statements. Clearly this could be achieved by developing two different applications, but it may be desirable to have a single application for easier maintenance.
For example, in a report, you could disable redundant queries by use of the SRW.SET_MAXROW() procedure. The following code in the Before Report trigger will disable either Query_Emp or Query_Dept, depending on a user parameter:
IF :Parameter_1 = 'A' THEN
  SRW.SET_MAXROW('Query_Emp', 0);
ELSE
  SRW.SET_MAXROW('Query_Dept', 0);
END IF;
There are several points to remember when using SRW.SET_MAXROW(): the only meaningful place to call SRW.SET_MAXROW() is in the Before Report trigger (after the query has been parsed). If SRW.SET_MAXROW() is called after this point, then the SRW.MAXROW_UNSET packaged exception is raised.
If an application is known to be spending an inordinate amount of time in the database, then it is often beneficial to review the structure of the data and how it is being used. Both Forms Developer and Reports Developer are non-procedural tools that are optimized for set-based logic, and a bad schema design can have a dramatic negative effect. For example, an overly normalized data model can result in many avoidable joins or queries, while a lack of appropriate indexes can result in many costly full-table scans.
The specific nature of your application will determine the most efficient data model. A query-driven application can benefit from de-normalized tables; normalized tables are usually best for applications that do many updates and inserts.
Efficient design and operation of the database and server will clearly benefit its client applications. However, because the creation and management of the database is a large topic, and one usually outside the domain of the application developers, the topic of database performance is only introduced here. The subject is covered in its own manual: Oracle Server Tuning. Using that manual and such server tools as SQL trace and the TKPROF utility, you can determine where your data model could be improved.
Even if that area is outside your direct control, you might still want to consult with the database server personnel to see if specific performance concerns could be addressed to mutual advantage.
Both Forms Developer and Reports Developer use SQL to talk to the database and retrieve data, and it is helpful for anyone tuning applications to have a good working knowledge of SQL and to understand how the database is going to execute these statements.
Inefficient SQL in your application can severely impact its performance. This is particularly true in applications that have large queries.
The Oracle database server provides you with two SQL optimizers: cost-based and rule-based. Using the cost-based optimizer gives you a significant amount of automatic optimization without having to involve yourself in the complexities of tuning your SQL; in addition, hints are provided that allow for additional tuning. Using the rule-based (heuristic) optimizer allows you to fine-tune your SQL to potentially achieve an even higher level of optimization, although it does require more work on your part and some understanding of SQL processing.
For most applications, the cost-based optimizer will give a satisfactory level of optimization. Indeed, an untuned cost-based optimization is often superior to a hand-tuned rule-based optimization. However, a developer who understands the spread of the data and the rules governing the optimizer, and who wants to attempt to achieve the highest level of efficiency, can try using the rule-based method.
In any event, it is important to choose one or the other optimizer: either enable cost-based optimization (by running ANALYZE on the tables or by setting the appropriate init.ora parameter), or tune your SQL for the rule-based optimizer.
When performing calculations within an application, the general rule of thumb is that the more calculations that can be performed within the query SQL, the better. When calculations are included in the SQL, they are performed by the database before the data is returned, rather than the data being returned and cached before the calculation is performed by the application. From Oracle 7.1 onwards, you can include server-stored user-defined PL/SQL function calls in the query select list. This is more efficient than using a local PL/SQL function (e.g., in a formula column), since the calculated data is returned as part of the result set from the database, so no further calculations are required.
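For illustration, assuming a hypothetical stored function tax_pkg.net_pay and the standard Emp table, the calculated value travels back with the result set instead of being computed in a formula column:

SELECT empno, ename, sal,
       tax_pkg.net_pay(empno) net_pay  -- computed on the server
FROM emp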
In Oracle8, calls to methods can use the SELF parameter, which simplifies and speeds the passing of arguments.
Declaring and using explicit cursors in your application gives you complete control over your database queries. However, such cursors are rarely necessary. (Both Forms Developer and Reports Developer create any needed cursors implicitly, and manage them for you.) Explicit cursors also add to network traffic, and therefore should be avoided in most applications.
Group filters are available in the Reports and Graphics components.
The main use for group filters is to restrict the number of records being retrieved to be the first or last n records, although there is also an option to create a PL/SQL filter condition. When using a group filter of either type, the query is still passed to the database and all data will still be returned to the application, where the filtering will take place. Therefore, even if the application displays only the top five records, the result set returned will contain all the records returned by the query.
For this reason, it is usually more efficient to try to incorporate the group filter into the where clause of the query wherever possible. This will restrict the data returned by the database.
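As a sketch (table and condition are illustrative), a PL/SQL group filter such as RETURN (:sal > 2000); can usually be replaced by an equivalent restriction in the query itself, so that non-qualifying rows never leave the server:

SELECT empno, ename, sal
FROM emp
WHERE sal > 2000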
Both Forms Developer and Reports Developer offer the ability to construct applications that use multiple components. For example, data might be fetched and manipulated by Forms, which then calls Reports to produce some output. Reports, too, could call Graphics to display some output visually.
While each called component could re-query the data, it is more efficient to have Forms create a record group to hold the data, and then pass that along as a parameter to Reports, and Reports similarly to Graphics. (This technique is sometimes referred to as query partitioning.) Using this technique, the data is queried only once.
When one component calls another, and the called component is not already in memory, there is a certain amount of time taken to load it. While this load-upon-demand makes more efficient use of memory, it can cause a perceptible wait for the end user.
It is possible to reduce the wait time by having the called component loaded initially (along with the calling component). This does lengthen start-up time (as well as use memory less-efficiently), but a wait at start-up is usually less noticeable than a wait in the middle of processing.
(This technique is more useful for Forms calling Reports than it is for Reports calling Graphics.)
All the general suggestions offered earlier in this chapter also apply to Forms applications. In addition, consider the following.
The general value and trade-offs of array processing have already been noted. In Forms, this setting for querying is controlled in the block property Query Array Size. For updating/inserting/deleting, the array processing setting is controlled in the block property DML Array Size.
If you can, base your data block on a stored procedure.
Stored procedures are the most direct way of moving processing to the server. When correctly designed, stored procedures can also eliminate many network round trips. For example, by basing a query on a stored procedure, the foreign key lookups and calculations can be performed on the server rather than in Post-Query triggers. Such triggers typically add at least one round trip per row, thereby losing the benefit of array fetches.
Similarly, by performing updates through a stored procedure, audit trails or denormalized data can be written without an additional network round trip; so can validations that might be necessary before attempting to perform the DML. This eliminates network round trips that previously might have occurred in Pre-Update, Pre-Insert, and Pre-Delete triggers.
If you are using a release prior to 2.0, you can manually build a data block on a stored procedure by writing transactional triggers, such as On-Select and On-Fetch. Using Release 2.0 or later, you can perform array fetches through stored procedures.
You also have two options for queries through stored procedures. The first option is to base a data block's query on a stored procedure that returns a Ref Cursor; the other is a stored procedure that returns a Table of Records.
A Ref Cursor is a PL/SQL construct that allows the stored procedure to open a cursor, and return to the client a "pointer" or reference to the cursor. The client can then fetch records from the cursor just as if the client had opened the cursor itself. In the case of Forms, records are fetched through the Ref Cursor using array fetches exactly as if Forms had opened the cursor itself for a data block based directly on a table or view.
A data block based on a Ref Cursor has many similarities to a data block based on a view, but there are two major advantages to a Ref Cursor. First, a stored procedure provides better encapsulation of the data. By denying direct query access to the tables, you can ensure that applications query the data only in ways that are meaningful (for example, a set of tables might be designed to be joined in a specific way to produce a particular set of information) or only in ways that are efficient (for example, queried in such a way that the indexes can be used).
The second advantage is that the stored procedure can be more flexible. The procedure can determine at runtime which one of several Select statements to execute in opening the cursor. This decision might depend on the role or authority of the user. For example, a manager might see all of the columns in the Emp table, but a clerk would be shown blanks for the salary. Or it might depend on a parameter so that a different set of data - historical versus current, for instance - can be displayed in a single data block. This decision can be as complex as you wish, providing you can write the PL/SQL. The only limitations are that all of the different Select statements must return a compatible set of columns and the Select statement cannot be composed dynamically at run time. (The database doesn't yet support Ref Cursors with dynamic SQL).
Note: Use of the REF cursor prevents use of Query-by-Example.
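A minimal sketch of such a stored package follows (all names are illustrative). Note that the procedure, not the client, decides which Select to open, and that both Selects return a compatible set of columns:

CREATE OR REPLACE PACKAGE emp_query IS
  TYPE emp_rec IS RECORD (empno emp.empno%TYPE,
                          ename emp.ename%TYPE,
                          sal   emp.sal%TYPE);
  TYPE emp_cur IS REF CURSOR RETURN emp_rec;
  PROCEDURE do_query (c IN OUT emp_cur, show_history IN VARCHAR2);
END emp_query;

CREATE OR REPLACE PACKAGE BODY emp_query IS
  PROCEDURE do_query (c IN OUT emp_cur, show_history IN VARCHAR2) IS
  BEGIN
    IF show_history = 'Y' THEN
      OPEN c FOR SELECT empno, ename, sal FROM emp_history;  -- historical data
    ELSE
      OPEN c FOR SELECT empno, ename, sal FROM emp;          -- current data
    END IF;
  END;
END emp_query;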
Introduced with PL/SQL release 2.3, a Table of Records is an in-memory structure similar to a database table. The stored procedure can build an in-memory table consisting of, quite literally, any data at all you can construct, row by row, much like an array. Whereas a Ref Cursor allows you to return anything that you know how to construct in SQL, a Table of Records allows you to return anything that you know how to construct in PL/SQL. Not only can you perform lookups and calculations on the server side, you can also make complex decisions about which records to include or exclude from the returned record set.
One example of something relatively easy to do in PL/SQL and very hard to do in SQL would be to return the employees in each department whose salary is among the top five salaries for their department. (What makes this hard in SQL is that several people could tie for the fifth-highest salary. In PL/SQL, it's a relatively simple loop.)
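A skeleton of the technique (all names are illustrative; the simple salary test stands in for whatever decision logic you need, such as the per-department top-five loop described above):

CREATE OR REPLACE PACKAGE emp_tab IS
  TYPE emp_rec IS RECORD (empno emp.empno%TYPE,
                          ename emp.ename%TYPE,
                          sal   emp.sal%TYPE);
  TYPE emp_table IS TABLE OF emp_rec INDEX BY BINARY_INTEGER;
  PROCEDURE query (t IN OUT emp_table);
END emp_tab;

CREATE OR REPLACE PACKAGE BODY emp_tab IS
  PROCEDURE query (t IN OUT emp_table) IS
    i BINARY_INTEGER := 0;
  BEGIN
    FOR r IN (SELECT empno, ename, sal FROM emp ORDER BY deptno, sal DESC) LOOP
      IF r.sal > 2000 THEN       -- any PL/SQL decision logic may go here
        i := i + 1;
        t(i).empno := r.empno;   -- build the in-memory table row by row
        t(i).ename := r.ename;
        t(i).sal   := r.sal;
      END IF;
    END LOOP;
  END;
END emp_tab;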
When called in response to a Forms query, the procedure builds the Table of Records on the server side. It then returns the whole result set to the client at once, using as few physical network round trips as the network packet size allows. Each record in the Table becomes a row in the Forms block. This frees up server resources and uses the network bandwidth very efficiently, at the cost of client resources and potentially wasting network traffic for unneeded records.
Note that when used for a master/detail query, the Table of Records technique will return all the detail records on the query. Thus it is suited only for smaller queries.
In summary then, although a Table of Records allows the procedure the greatest flexibility in determining the result set, it should be used with care.
In Release 2.0, you can also use a Table of Records passed to a stored procedure to perform inserts, updates, and deletes from a block. The stored procedure can then "fan out" your changes to as many tables as necessary, potentially saving many network round trips if your data model is highly normalized. Writing audit trails is another possible use. This technique requires that you provide a procedure for each of Insert, Update, and Delete.
As with a block based on a regular table, Forms automatically maintains the state of each record to determine if it is an inserted, updated or deleted record. At commit time, Forms constructs a Table of Records for all of the inserted records, another for the updated records and another for deleted records. It then calls each of the procedures, passing it the relevant Table of Records. As with query, the Table of Records is passed in a "single shot." In this case, though, there is no disadvantage to sending the whole Table at once, since all the records have to be sent to the server to be committed anyway.
Finally, it is worth noting that you might combine these other techniques in any way. For example, you might choose to query through a Ref Cursor while performing DML through a Table of Records, giving you the best of both worlds.
By default, Forms assigns a separate database cursor for each SQL statement that a form executes implicitly or as part of posting or querying data. This behavior enhances processing, because the statements in each cursor need to be parsed only the first time they are executed in a Runform session -- not every time.
Forms does allow you to save some memory by having a single cursor for all implicit SQL statements (other than query SELECTs). (You would do this by setting the runtime option OptimizeTP to No.) However, the memory savings are usually insignificant, and when you do this, processing is slowed because all Insert, Update, Delete, and Select for Update statements must be parsed every time they are executed.
Therefore, it is recommended that you avoid using the OptimizeTP=NO setting.
By default, Forms assigns a separate database cursor for each SQL statement that a form executes explicitly in a trigger. This behavior enhances processing, because the statements in each cursor need to be parsed only the first time they are executed in a Runform session -- not every time.
Forms also allows you to save some memory by having a single cursor for all SQL statements in triggers. (You would do this by setting the runtime option OptimizeSQL to No.) However, the memory savings are usually insignificant, and when you do this, processing is slowed because the SQL statements must be parsed every time they are executed.
Therefore, it is recommended that you avoid using the OptimizeSQL=NO setting.
It is often more efficient to divide a large application into multiple small forms, and then navigate between the various forms as needed.
You can reduce navigation time if you keep a frequently-used form open after its initial use, rather than opening and closing it each time it is used. Closing and re-opening a form involves considerable overhead, which slows performance.
To keep forms open, navigate by using the OPEN_FORM built-in instead of the NEW_FORM built-in. (NEW_FORM closes the previously-used form when opening the new one.)
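For example, in a navigation trigger (the form name is illustrative):

-- keeps the calling form open while activating the called form
OPEN_FORM('order_entry', ACTIVATE, NO_SESSION);
-- by contrast, NEW_FORM('order_entry') would close the calling form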
A larger fetch size reduces the number of fetches required to obtain a record group. If your application is using record groups (or constructing record groups at runtime), set this size using the Record Group Fetch Size property.
If you are using the Oracle8 server, it is more efficient to use the LOB (large object) datatypes instead of LONG or LONG RAW.
Each global variable takes 255 bytes. If an application is creating a large number of these variables, consider erasing them when they are no longer required. (Use the Erase built-in for that purpose.)
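For example (the variable name is illustrative):

-- release the 255 bytes held by a global that is no longer needed
ERASE('GLOBAL.last_order_id');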
The following suggestions may improve resource usage for very large forms running on Microsoft Windows. (These suggestions differ from standard design practice, and should only be used in those cases where resource usage is a problem.)
Whenever a user updates a record, Forms locks the record, and a round-trip to the database takes place.
If only a single user is updating the data, the locking is not necessary. To turn off locking, set the Form property Isolation Mode to Serializable; set the block property Locking Mode to Delayed.
However, the suppression of Forms' locking should be done only if you are quite certain that there can never be simultaneous use of the data.
All the general suggestions offered earlier in this chapter also apply to Reports. In addition, consider the following:
Once the data has been retrieved from the database, Reports needs to format the output following the layout model that the user has created. The time taken to generate the layout is dependent on a number of factors, but it is mostly devoted to preventing an object from being overwritten by another object, and performing any calculations or functions in the format triggers. Greater efficiency in these two areas will have the greatest payoff.
When generating a default layout, Reports puts a frame around virtually every object to protect it from being overwritten when the report is run. At runtime, every layout object (frames, fields, boilerplate, etc.) is examined to determine the likelihood of that object being overwritten. In some situations (for example, boilerplate text column headings), there is clearly no risk of the objects being overwritten, and hence you can remove the immediately surrounding frame. This reduces the number of objects that Reports has to format, and hence improves performance.
Similarly, when an object is defined as having an undefined size (variable, expanding or contracting in either or both the horizontal and vertical directions), then extra processing is required, because Reports must determine that instance of the object's size before formatting that object and those around it. Where feasible, set this sizing to fixed, which will eliminate this additional processing, since the size and positional relationships between the objects are already known.
It is generally preferable to use declarative format commands rather than format triggers. However, format triggers are useful for making runtime changes. Specifically:
Care should always be exercised when using format triggers: be aware that the trigger fires not just once for every instance of its associated object on the output media, but every time the object is formatted at runtime.
These two events may seem like the same thing, but consider the following example. A tabular report includes a single repeating frame that can expand vertically and has page protect set on. As this report is formatted, there is room for one more line at the bottom of the first page. Reports starts to format the next instance of the repeating frame and fires its associated format trigger. One of the objects inside the repeating frame is found to have expanded, and this instance of the repeating frame is therefore moved to the following page, and the format trigger for the repeating frame is fired again. Hence, although the repeating frame only appears once (at the top of the second page), the format trigger has fired twice. Had this format trigger contained an INSERT statement, then two rows of the same data would have been inserted.
Format triggers should also be placed at the highest level possible in the object/frame hierarchy, so that the trigger fires at the lowest possible frequency. For example, if there are four fields in a frame, a format trigger at the field level will fire four times, whereas a format trigger at the frame level will need to fire only once.
If PL/SQL must be used in a format trigger, place it in the trigger of the object with the lowest frequency possible. For example, PL/SQL in the format trigger of a frame instead of a field typically makes the report run faster. The PL/SQL in a format trigger is executed for each instance of its object. The lower the frequency of the object, the fewer times the PL/SQL will be executed and the faster the report will run.
Because you cannot be sure how many times a format trigger will fire for a particular object, you should not perform calculations or use DML in a format trigger.
If the display attributes of a field are to change dynamically (for example, to draw attention to values outside the norm), then all attribute changes can be set in a single call to SRW.SET_ATTR() (passing the SRW.ATTR attribute record), or in multiple calls to SRW.SET built-ins with each call setting a separate attribute. Although the latter technique makes for more readable code, runtime efficiency is usually better with the single SRW.SET_ATTR() call -- especially if many attributes need to be set.
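A sketch of the single-call approach in a field's format trigger (the threshold, color, and attribute constants are illustrative; check the SRW package in your release for the exact names):

FUNCTION F_sal_format RETURN BOOLEAN IS
BEGIN
  IF :sal > 5000 THEN
    -- set several attributes, then apply them with one SRW.SET_ATTR call
    srw.attr.mask    := SRW.WEIGHT_ATTR + SRW.FFCOLOR_ATTR;
    srw.attr.weight  := SRW.BOLD_WEIGHT;
    srw.attr.ffcolor := 'red';
    srw.set_attr(0, srw.attr);
  END IF;
  RETURN (TRUE);
END;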
As with most operations, there are a number of ways to create data models that include more than one table. Consider, for example, the standard case of the department-employee join; i.e., the requirement is to list all the employees in each department in the company. In Reports Developer, the programmer can either create a single query, or two queries and use a master-detail relationship between the two.
On the application side, when designing the data model it is preferable to minimize the actual number of queries by using fewer, larger (multi-table) queries rather than more numerous, simpler (single-table) queries. Each time a query is issued, Reports Developer needs to parse, bind, and execute a cursor. A single query can therefore return all the required data in a single cursor rather than in many. Also be aware that with master-detail queries, the detail query will be re-parsed, re-bound, and re-executed for each master record retrieved. In this instance it is often more efficient in a report to merge the two queries and use break groups to create the master-detail effect.
It should be noted, however, that the larger and more complex a query becomes, the more difficult it can be to maintain. Each site needs to decide how to balance its performance and maintenance requirements.
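For example (using the standard demo tables), a department-employee master-detail pair can often be replaced by a single joined query, with a break group on the department columns recreating the master-detail effect:

SELECT d.deptno, d.dname, e.empno, e.ename, e.sal
FROM dept d, emp e
WHERE e.deptno = d.deptno
ORDER BY d.deptno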
Having designed a report to run efficiently, you can further improve the overall performance of a report by setting specific runtime arguments.
The ARRAYSIZE and RUNDEBUG settings have been discussed previously.
If a parameter form or on-line previewing of the report is not required, then you can bypass these functions by setting the PARAMFORM and BATCH system parameters appropriately.
When your application is in regular, production use, make sure it is not running in debug mode.
In debug mode, the runtime products will gather information and perform other internal operations. Once your application is debugged, these operations are no longer needed and detract from performance.
In Reports, debug mode is controlled through the runtime parameter RUNDEBUG; this should be set to NO.
Give layout objects (e.g., frames and repeating frames) a transparent border and fill pattern.
Transparent objects do not need to be rendered in a PostScript file. As a result, processing is faster when objects are transparent.
Make your non-graphical layout objects (e.g., boilerplate text or fields with text) fixed in size -- that is, Vertical and Horizontal Elasticity of Fixed. In particular, making repeating frames and their contents fixed in size can improve performance.
Non-graphical objects that are variable in size require more processing because Report Builder must determine their size before formatting them. Non-graphical objects that are fixed in size do not require this additional processing because their size is already known.
Make your graphical layout objects (e.g., images and Oracle Graphics objects) variable in size -- that is, Vertical and Horizontal Elasticity of Variable.
Graphical objects that are fixed in size usually need to have their contents scaled to fit inside of the object. Scaling an object's contents requires more processing. If the object is variable in size, it can grow or shrink with the contents, and scaling is not necessary.
Specify Reduce Image Resolution for image objects whose size you reduce. (This option is available as a drawing option under the Format menu.)
When you reduce the size of an image, it requires less information to display it than when it was larger. Reduce Image Resolution eliminates the unnecessary information and reduces the amount of space needed to store the image. This can be particularly useful for large, multi-colored images.
Make fields that contain text one line long and ensure that their contents fit within their specified width (e.g., by using the SUBSTR function).
If a field with text spans more than one line, then Report Builder must use its word-wrapping algorithm to format the field. Ensuring that a field only takes one line to format avoids the additional processing of the word-wrapping algorithm.
Minimize the use of different formatting attributes (e.g., fonts) within the same field or boilerplate text.
If text in a field or boilerplate object contains numerous different formatting attributes, it requires longer to format.
Ensure that the break order property is set for as few columns in the break group as possible (break order is indicated by a small triangle to the left of the column name in the group). Each break group requires at least one column within it to have break order set.
If sorting is necessary for a break group, use an ORDER BY clause in its SQL. This will cause the rows to be returned already sorted by break order, and improve performance by reducing the amount of sorting that must be done on the client.
For each column that has break order set, Reports places an extra column into the appropriate query's ORDER BY clause. The fewer columns in the ORDER BY, the less work the database server has to do before returning the data. The creation of a break group may make the ORDER BY clause defined in the query redundant. If this is the case, then the redundant ORDER BY should be removed, since it requires extra processing on the database.
Break order columns should be as small as possible, and should also be database columns (as opposed to summary or formula columns) wherever this is feasible. Both of these conditions help the local caching that Reports does before the data is formatted to be as efficient as possible. Clearly, these conditions cannot always be met easily, but they are worth considering all the same.
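As a simple sketch (demo table, break group on deptno only), matching the query's ORDER BY to the break column lets the server return the rows pre-sorted:

SELECT deptno, ename, sal
FROM emp
ORDER BY deptno  -- matches the single break-order column; nothing redundant to sort on the client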
If a Graphics Builder display referenced by a report uses some or all of the same data as the report, pass the data from the report to the display. You can specify that data be passed in the Property Palette for the display in Report Builder.
If the report and the display use the same data, passing the data reduces the amount of fetching that needs to be done. If you do not pass the data from the report to the display, the data is actually fetched twice: once for the report and once for the display.
Depending upon the circumstances, PL/SQL or a user exit may perform better. Following are the items you should consider when deciding between PL/SQL and user exits:
PL/SQL is highly recommended, because it allows the code to be procedural and still portable. PL/SQL also offers performance advantages when referencing report-level objects. User exits will perform better, but they require linking and are not portable. It makes sense to include a user exit if the required action will be executed for every record in a very large report, or if it performs numerous calculations. A PL/SQL procedure makes more sense if the action occurs only for each group or at the report level. Furthermore, not everything can be done in PL/SQL; some actions, like controlling external devices, have to be done in a C program.
Note: If there is no performance improvement or other good reason to use a user exit, you should use PL/SQL because it is easier and portable.
Use PL/SQL for DML, unless you want to pass parameters to your DML statements.
SRW.DO_SQL() should be used as sparingly as possible, because each call to SRW.DO_SQL() necessitates parsing and binding the command and opening a new cursor (just as with a normal query). Unlike the query, however, this operation will occur once each time the object owning the SRW.DO_SQL() fires. For example, if a PL/SQL function calls SRW.DO_SQL(), and the group where the function resides returns 100 records, then the parse/bind/create cursor operation will occur 100 times. It is therefore advisable to only use SRW.DO_SQL() for operations that cannot be performed within normal SQL (for example, to create a temporary table, or any other form of DDL), and to use it in places where it will be executed as few times as possible (for example, in triggers that are only fired once per report).
Writing DML statements in PL/SQL is faster than an SRW.DO_SQL call containing the same statement. The reason to use SRW.DO_SQL for DML statements is that it can concatenate bind parameters to construct the DML statement. For example, you can have SRW.DO_SQL create a table whose name is determined by a parameter entered on the runtime parameter form:
SRW.DO_SQL('CREATE TABLE ' || :tname || ' (ACCOUNT NUMBER NOT NULL PRIMARY KEY, COMP NUMBER(10,2))');
Usage Notes: You can also use the dbms_sql package that comes with Oracle 7.1 or later for DML. Refer to your Oracle database server documentation for more information.
Your PL/SQL code can be local (in the Program Units node of your report in the Object Navigator) or stored externally in a PL/SQL library on the server.
Depending on conditions, local PL/SQL might execute more quickly than a reference to a procedure or function in an external PL/SQL library. However, even if you determine that local PL/SQL would run faster under your conditions, you should still weigh that benefit against the loss of the benefits of the library method (e.g., sharing the code across many applications).
Minimize the number of calls to SRW.SET_ATTR by specifying multiple attributes in one call, rather than making a separate call for each attribute.
Rationale: The fewer calls you make to SRW.SET_ATTR, the faster the PL/SQL will run.
The value of array processing has been noted earlier.
For Report Builder's ARRAYSIZE executable argument (e.g., ARRAYSIZE=10), enter as large a value as you can. Note that the array size is measured in kilobytes, not rows; ARRAYSIZE means that Report Builder can use that number of kilobytes of memory per query in executing your report. Report Builder uses Oracle's array processing, which fetches multiple records in batches, instead of one record at a time. As a result, you can control the amount of data fetched in each batch.
For Report Builder's LONGCHUNK executable argument (e.g., LONGCHUNK=10), enter as large a value as you can. Refer to the Oracle installation information for your operating system for the recommended amount for your machine. LONGCHUNK determines the size of the increments in which Report Builder will retrieve a LONG value; the LONGCHUNK size is measured in kilobytes.
By increasing LONGCHUNK as much as possible, you can reduce the number of increments it takes Report Builder to retrieve LONG values.
When printing to PostScript, specify COPIES=1.
If COPIES is set to something greater than 1 for a PostScript report, Report Builder must save the pages in temporary storage in order to collate them. This can significantly increase the amount of temporary disk space used by Report Builder, and the additional writing to files can slow performance.
Report Builder provides you with the ability to display data such as the total number of pages, or grand totals, in the report margins or on the header pages. This is an extremely useful function, but it requires that the entire report be processed before the first page can be displayed.
Avoiding "fetch-ahead" operations when designing a report for the Previewer or Live Previewer will help speed the display of the first page of the report.
The following items can result in fetching ahead when referenced before the data on which they rely:
When you use a total number of pages field source, Report Builder must save all of the pages in temporary storage in order to determine the total number of pages. This can significantly increase the amount of temporary disk space used by Report Builder, and the additional writing to files can slow performance.
Cross-product groups also cause fetching ahead. In order to cross-tabulate the data in a cross-product group, Report Builder must first fetch all of the data. It should be noted that these items are not really performance "problems." They slow down the Previewer or Live Previewer, but they do not affect performance when writing to a file or some other destination.
Note: A column can cause fetching ahead even if it is not displayed. For example, a grand total may not appear in the report output, but, since it is in the report, fetching ahead may still occur when Report Builder calculates it.
Store documents in files to enhance performance. Store documents in reports and in the database for security. If you open a report from or save a report to a database, some Report Builder tables will be put into memory. As a result, you need to ensure that you have enough resources to cache the tables.
For documents, writing to and reading from files tends to be much faster than the database.
Exception: If you must use a busy network or slow machine to access the files, you may not realize performance gains from storing in files.
Specifying path variables may be of some help in speeding up file searching and creation/access of temporary files. (Report Builder provides two environment variables, REPORTSnn_PATH and REPORTSnn_TMP, that govern where to search for files and where to store temporary files. The nn is the Report Builder release level.) For REPORTSnn_PATH, specify the path in which files referenced by the report are located. For REPORTSnn_TMP, specify a path that has sufficient free space for temporary files and is on a device with fast response time (e.g., a RAM disk).
REPORTSnn_PATH is the default path in which Report Builder will search for files (e.g., link file boilerplate). By specifying the path in which the files referenced by the report are located, you can reduce the amount of searching Report Builder needs to perform to retrieve the files. (By using REPORTSnn_PATH instead of hard-coding the paths in the report definition, you also maintain the portability of your report.) REPORTSnn_TMP is the path in which Report Builder will create its temporary files.
If you are using the Reports Server, set the SOURCEDIR= parameter in the server-name.ora file. That directory will be searched before the REPORTSnn path is used.
The Multi-Tiered Reports Server is a feature designed to efficiently handle large-scale production reports -- reports that are not practical to run on a desktop machine.
With this feature, you can run multiple large reports simultaneously on a robust server machine more appropriate for the task. The server can invoke multiple Reports engines if desired, thus further maximizing efficiency. In addition, report output can be cached on the server, where it can be available to multiple Reports users in the network (so the report need be generated only once).
The general suggestions offered earlier in this chapter also apply to Graphics applications. In addition, consider the following:
Start-up time for an application that uses graphics will be faster if the OGD graphics files have been pre-loaded. If it is uncertain which specific files will be needed at runtime, a dummy OGD can be created and pre-loaded.
Understand and control the damage update flag -- which is one of the arguments to most Graphics PL/SQL built-ins. If you allow the damage flag to default, it will be set to TRUE, which means that redrawing will occur every time the Graphics display list is modified. Such redrawing may not always be necessary.
Performance is improved if PL/SQL program units (including button procedures, triggers, and so forth) update the display only once. Don't include updates in loops if not necessary.
If the Graphics application is called by Forms or Reports, try to design the applications to share as many elements as possible. For example, when charting data already fetched by Forms, pass the same data to the display in record groups (instead of having the display re-query the database).
If all data is being shared and the Graphics application has no need to call the database server, set the LOGON parameter to NO when the Graphics application is invoked. (If LOGON is not set to NO, Graphics will reconnect to the server, slowing down its initiation.)
Also, use the same color palette and same fonts in your form or report and in your display. In addition, keep the same coordinate system, if possible.
The DO_SQL procedure is useful for executing DDL statements. However, do not use this procedure to execute DML statements. In general, DML statements are executed more efficiently within program units than with the DO_SQL procedure.
When you use a built-in subprogram to perform an operation on a Graphics object, you need to identify the object. If you are going to reference an object multiple times in PL/SQL, it is more efficient to assign a handle (that is, a pointer) to the object and identify the object by its handle, rather than to identify the object by its name. Providing the handle reduces internal search time.
Graphics provides a series of built-in subprograms that simplify the process of creating objects and getting or setting their attributes. Using these built-ins in place of the attribute record approach reduces development time, and makes program units easier to read and understand.
However, using these built-ins has an adverse effect on runtime performance. Each call to a built-in requires Graphics to define and populate a new internal attribute record. It also takes longer to execute multiple set routines than to execute just one. In addition, using these built-ins requires your application to rely on default settings.
As a rough guideline, if you need to set three or more attributes, it is more efficient to use attribute masks or create a library of your own shortcuts with pre-defined defaults.
In the traditional client/server structure, the application runs on the client, and the database and its software reside on the server. All of the general suggestions offered earlier in this chapter are applicable in a client/server set-up. In addition, consider the following client/server-specific suggestions:
Both Forms Developer and Reports Developer give you an install-time choice in where their software will reside. Each configuration has assets and drawbacks. Choose the one best suited for your situation.
After you have created an application, you have the choice of storing it on the client or on the server. Storing applications on the server allows shared access to them, and also saves disk space on the clients. On the other hand, applications stored locally on the clients allow faster access.
In addition to the space and sharing considerations, storing on the server may offer the additional advantage of superior security.
Given these considerations, choose the residence best suited to your situation.
In a three-tier structure, Tier 1 is the runtime user's desktop machine. It runs a Java applet, which loads part of the Forms runtime product, known as the Forms client portion. Tier 2 is an application server, which runs the remaining portion of the Forms runtime product, known as the Forms server portion. Tier 3 is the database server. Communication takes place between the Forms client and Forms server, and also between the Forms server and the database server.
The general performance suggestions offered earlier in this chapter also apply here in the three-tier world. For example, the interaction between the application server component and the database server is essentially the same as that between the application server and the database in the two-tiered client/server environment. Therefore, areas such as improved use of PL/SQL and more efficient use of the database are equally relevant here.
With a three-tier environment, obviously there is communication not just between the application server and the database (Tiers 2 and 3), but also between the client on the desktop machine and the application server (Tiers 1 and 2). Therefore, the reduction of network usage becomes an even more important area on which to focus.
The suggestions below are those that are specific to the three-tier environment.
The interactions between the client on the Tier 1 desktop machine and the server on the application server machine become more significant as the number of end users increases. The following suggestions will help you maximize your application's scalability. (These suggestions apply to any Forms Developer or Reports Developer application.)
The network connection between Tiers 1 and 2 is often heavily used in the three-tier environment, and therefore network efficiency is an important area for performance. Increasing the bandwidth here can lead to significant improvements.
Changes in the user interface during execution require interactions between the Tier 1 and Tier 2 machines. Such changes slow down performance (as experienced by the end user).
You can speed up execution by avoiding the following types of runtime activities:
As a general principle, you should limit activities that involve frequent screen refreshing. For example, avoid the use of short-interval visual timers or clocks. (Timers with intervals longer than one minute are usually not a problem.) Design your user interfaces so that events are initiated by user interaction rather than elapsed clock time.
If your application uses stacked canvases, set their Visible property to No, and set their Raise on Entry property to No. This will minimize runtime interface changes.
Try to perform validation at a higher level. Application design and scalability decisions often involve a trade-off; for example, field-level validation will generate significantly more network traffic than block-level validation, but will be more interactive for users.
Enabling and disabling menu items programmatically can reduce performance in Webforms.
If your application uses graphics, limiting the size of the display files will help performance. To help keep the display size small, you can, for example:
If your application uses graphics (JPG files), use the environment variables FORMSnn_MAPPING and FORMSnn_PATH to identify their URL location.
Use multimedia only if it is important for the user interface. Where you do use it, define (or redefine) button triggers to make a call to a URL that contains media information.
Running animations over a network is extremely costly. If such elements are required, look at using animated graphic files that are stored on the client side.
Take advantage of custom hyperlinks to create hyperlink drill-downs. With this technique, code is not loaded to the user machine unless it is actually needed.
Put as much code as possible into libraries to maximize code sharing between objects and applications and to minimize file size during loading.
Beginning with Release 2.0, libraries are shared across multiple forms. This means that program units don't have to be re-loaded and unpacked with each form. It also means that less memory is used, because there is only a single copy of the program unit in memory.
When an application begins running on the desktop machine, it requires the availability of a number of Java class files. In the typical application, there may be a considerable number of these files, and downloading them from the server adds to start-up overhead.
Beginning with Release 6.0, some of these Java class files are packaged as JAR files. The JAR files can then be stored on the desktop machine instead of on the application server, so application start-up is faster.
You can also place the remaining class files required for your application into JAR files on the desktop machine. This can be done using the Oracle Java Developer Kit.
In some situations where fast user interaction is desired, it may be advantageous to pre-load the application; that is, to start it before the actual intensive usage will be needed. In this way, the initial loading phase will already have been completed, and the subsequent invocations will be faster.
When the application is invoked from the desktop, the user can choose to have it downloaded in an uncompiled state, and compiled on the desktop as it begins running. This option may produce faster overall invocation time.
The suggestions for Tier 2 - Tier 3 interaction (interaction between the application server and database server) are the same as for the client/server environment discussed earlier in this chapter. For example, you can use the DML Array Size property, use data blocks based on stored procedures, and so forth. All those earlier suggestions for database interaction apply here as well.
Increasing the power of the underlying hardware anywhere in the three-tier system will almost certainly have a positive effect on performance.
Some recent test results suggest that the most significant improvements can be obtained by upgrading the power of the Tier 2 processor. However, each site and situation is unique, and these results may not be universally applicable.
In a three-tier structure, it is possible to have multiple versions of the Tier 2 component. You can employ several intermediate server machines, each running a copy of the Forms or Reports Server component.
You use the Oracle Application Server to coordinate processing. Requests from the client (Tier 1) machines come to the Oracle Application Server, which passes them to one of the Tier 2 servers.
With multiple Tier 2 servers operating and sharing the work load, performance on that tier can be improved.
Copyright © 2000 Oracle Corporation. All Rights Reserved.