Oracle Rdb and Oracle CODASYL DBMS performance analysis tools have much in common. While this article was originally written for DBMS, Rdb sites will also find the information quite useful.
The CODASYL DBMS utility, DBO/SHOW STATISTICS, provides a dynamic view of the database environment. Use it to diagnose database performance problems and to determine if an application is using system resources effectively.
Performance issues discussed in this article include I/O performance, effectiveness of buffering, index statistics, database verb usage, and database locking statistics. Taking each performance issue in turn, we’ll discuss the information DBO/SHOW STATISTICS provides and then present some alternatives for remedies.
DBO/SHOW STATISTICS collects statistics while the database is open and the application is running, showing dynamically what is happening. Generally, it makes sense to sample data over relatively long periods of time (such as a work shift). However, significant response time problems may exist in short bursts and may not be obvious when examining long-term averages. Increasing the sampling frequency during problem times might provide the information needed to isolate the specific situations causing the delays.
Some of the statistics collected tell us generally what is happening with the database; others relate to parameter settings, which can be changed to improve performance. With a well-designed database, we recommend a “steady-state” approach to analyzing statistics. By monitoring the statistics on a regular basis, the DBA becomes familiar with normal performance, notices deviations more easily and can make timely adjustments.
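The steady-state approach amounts to comparing current samples against a recorded baseline. A minimal sketch of that comparison follows; the metric names, values, and 25% tolerance are illustrative assumptions, not part of DBO output:

```python
def flag_deviations(baseline, current, tolerance=0.25):
    """Return metrics whose current value deviates from the baseline
    by more than the given fractional tolerance (25% by default)."""
    flagged = {}
    for name, base in baseline.items():
        cur = current.get(name)
        if cur is None or base == 0:
            continue  # no sample, or baseline unusable as a divisor
        change = (cur - base) / base
        if abs(change) > tolerance:
            flagged[name] = change
    return flagged

# Hypothetical per-transaction averages recorded during normal operation:
baseline = {"pages_checked_per_txn": 1.1, "ruj_writes_per_txn": 0.40}
current  = {"pages_checked_per_txn": 2.6, "ruj_writes_per_txn": 0.42}
print(flag_deviations(baseline, current))
```

A DBA who samples regularly could feed each new interval through a check like this and investigate only the metrics that drift well outside their normal range.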
DBO/SHOW STATISTICS can help pinpoint the problem areas affecting performance. Performance often degrades as databases grow in size and volume. If statistics are gathered as the database grows, you can compare differences in transaction rates over time. The performance of system-owned sorted sets also degrades beyond a certain size; by collecting a history of statistics at various database sizes, you can identify the level at which degradation occurs. Let’s look at specific performance issues and see how the information displayed on the DBO/SHOW STATISTICS screens can assist in diagnosing problems.
I/O Performance
When an area is too full, DBMS may have to search several pages to find sufficient available space. The Records Statistics screen provides some information about how much I/O is being done to find available space. Compare the ‘average per transaction’ for ‘records stored’ and ‘pages checked.’ If the average for ‘pages checked’ is significantly higher than for ‘records stored’, DBMS is not able to store records on the target page and must look at other pages before the store. This could mean the area is full, or that records are too clustered and SPAM thresholds are not set optimally.
Behind the scenes, DBMS calculates the target page based on the placement clauses in the storage schema and retrieves that page into the current buffer. If the target page has enough available space and is not already at the T3 threshold, the record is stored there. If the page has insufficient available space or is already at T3, DBMS looks on other pages in the current buffer. This doesn’t incur additional I/O, but the count of ‘pages checked’ goes up, so its ratio to ‘records stored’ will be greater than one. If the ratio is exactly 1, DBMS is always able to store a record on the target page, which might mean that there is too much free space and no record clustering. If you determine that an area is too full, changing the page size and allocation for the area, or directing records to a new area, can relieve the problem.
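As a back-of-the-envelope check, the ‘pages checked’ versus ‘records stored’ comparison reduces to a single ratio. The sketch below uses made-up sample values; the function name and numbers are illustrative, not DBO terminology:

```python
def store_efficiency(pages_checked, records_stored):
    """Average number of pages DBMS examined per record stored.
    A ratio of 1.0 means every record landed on its target page;
    values well above 1.0 suggest a full area or poorly tuned
    SPAM thresholds."""
    if records_stored == 0:
        return None  # no stores in this sample; ratio is undefined
    return pages_checked / records_stored

# Hypothetical 'average per transaction' figures from one sampling interval:
ratio = store_efficiency(pages_checked=540, records_stored=180)
print(ratio)  # 3.0 -- DBMS checked three pages per store on average
```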
Effectiveness of Buffering
Heavy swapping of pages in and out of buffers can result from buffer pool overflows or from lock conflicts. A lock conflict occurs when one process requests an updated page in use by another process, forcing DBMS to write the updated page back to the database before the transaction is actually committed. The RUJ file records this read/write activity so that records can be reconstructed in the event of a rollback. The Summary I/O Statistics screen shows the average RUJ file reads and writes per transaction. If the number of ‘RUJ writes’ is greater than zero, then updates are occurring. If the number of ‘RUJ reads’ is greater than zero, then transactions are rolling back. If this number is large, you might investigate why the transactions are not being completed.
The Physical IO Statistics screens can give some clues about buffers. On the Writes version of the physical IO displays, look at ‘pool overflow’ and ‘lock conflicts’ in the ‘unmarked buffer’ section of the screen. If you see a relatively high number of pool overflows per transaction, you might need to increase the size of the buffer pool. The ‘lock conflict’ statistics indicate the number of times that a modified buffer was written back to the database because another process needed a page that was in the buffer pool – waiting for the first process to COMMIT the transaction and let go of the page. If there is too much swapping of pages between processes, change the database buffer pool configuration or shorten transaction durations. Keep in mind that optimizing the database buffer pool configuration for the activity of one specific area or application might throw off the efficiency of another.
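The two ‘unmarked buffer’ write causes described above can be normalized to per-transaction rates before comparing samples. This is a sketch with invented counts; there are no fixed thresholds, so interpret the rates against your own baseline:

```python
def buffer_write_rates(pool_overflows, lock_conflict_writes, transactions):
    """Per-transaction rates for the two 'unmarked buffer' write causes.
    A persistently high overflow rate suggests enlarging the buffer
    pool; a high lock-conflict rate suggests page contention between
    processes (long transactions holding pages)."""
    if transactions == 0:
        return None  # nothing to normalize against
    return {
        "pool_overflow_per_txn": pool_overflows / transactions,
        "lock_conflict_per_txn": lock_conflict_writes / transactions,
    }

# Hypothetical counts taken from one sampling interval:
print(buffer_write_rates(pool_overflows=1200,
                         lock_conflict_writes=300,
                         transactions=400))
```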
Index Statistics
These statistics give some measure of the effectiveness and volatility of the indices in the database. Performance may be adversely affected when DBMS has to spend too much time balancing the index nodes as records are added and deleted. This activity is shown on the Index Statistics screen, in the ‘balance index’ and ‘scrolling’ categories (one-way, two-way and three-way scrolling, as nodes in the B-Tree are merged and split). Use DBO/MODIFY/RESTRUCTURE to reset the B-Tree fill factor.
Database Verb Usage
Verb statistics show the mix of transactions occurring during the sampling period. The Verb Statistics screens provide information about verb successes and failures, and also show statistics for some common verbs such as COMMIT, READY and ROLLBACK. High failure rates could indicate too many deadlocks, but not necessarily, since a verb failure does not always mean a database error – for example, the End-of-Collection (record not found) condition is counted as a verb failure.
Take a look at the “mix” of verbs to determine whether there is more updating activity (CONNECT, DISCONNECT, ERASE, MODIFY/RECONNECT, STORE/PLACE) or reading activity (FIND/FETCH/GET). And check how many verbs are executed per transaction.
Verb statistics are best used to compare against “normal” performance, and to check against other statistics. If you find more read transactions, investigate IO performance. If there are more update transactions, investigate locking statistics. Infrequent or low numbers of COMMITs could also be causing lock conflicts.
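The read/update split described above can be computed directly from per-verb counts. The grouping below follows the article’s lists; the sample counts are invented, and treating MODIFY/RECONNECT and STORE/PLACE as separate entries is a simplifying assumption:

```python
# Verb groups as described above (update activity vs. read activity).
UPDATE_VERBS = {"CONNECT", "DISCONNECT", "ERASE", "MODIFY", "RECONNECT",
                "STORE", "PLACE"}
READ_VERBS = {"FIND", "FETCH", "GET"}

def update_fraction(verb_counts):
    """Fraction of classified verb executions that are updates.
    Verbs outside both groups (e.g. COMMIT, READY) are ignored."""
    updates = sum(n for v, n in verb_counts.items() if v in UPDATE_VERBS)
    reads = sum(n for v, n in verb_counts.items() if v in READ_VERBS)
    total = updates + reads
    return updates / total if total else None

# Hypothetical counts from one sampling interval:
print(update_fraction({"FETCH": 700, "STORE": 200, "MODIFY": 100}))  # 0.3
```

A fraction well below 0.5 points toward investigating I/O performance; one well above it points toward the locking statistics, as suggested above.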
Database Locking Statistics
Too many lock conflicts might be an indication of database bottlenecks or long transactions. On the Summary Locking Statistics screen, look at statistics for ‘rqsts not queued’, ‘rqsts stalled’ and ‘rqst deadlocks.’ Relatively high numbers mean contention problems or that the database design is not optimal. The ‘blocking ASTs’ statistics indicate contention for frequently-accessed data. If you see a high number of ‘blocking ASTs’ relative to the number of ‘locks requested’ (more than 20%), and you have Adjusted Locking Granularity enabled, you may want to try disabling ALG.
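The 20% rule of thumb above is a simple ratio test. A sketch follows; the function name and sample counts are illustrative, and the threshold is the article’s guideline, not a hard limit:

```python
def consider_disabling_alg(blocking_asts, locks_requested, threshold=0.20):
    """True when blocking ASTs exceed the given fraction of lock
    requests (~20% per the guideline above) -- the point at which
    disabling Adjusted Locking Granularity may be worth testing."""
    if locks_requested == 0:
        return False  # no lock traffic, nothing to conclude
    return blocking_asts / locks_requested > threshold

# Hypothetical counts from the Summary Locking Statistics screen:
print(consider_disabling_alg(blocking_asts=2500, locks_requested=10000))  # True
```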
If the ‘stall time x 100’ statistics look high, investigate details about the types of locks for that particular event. From the Display_menu list, select Locking – (one stat field), then select stall time x100 to see which resources are involved with the longest stalls. Statistics for ‘quiet point lock’ are related to backups. If these numbers are high, consider adjusting or rescheduling the work load so that backup processes can run without interference. ‘Freeze locks’ refer to recovery processes that occur when a process aborts abnormally or is killed. If these statistics seem high, investigate why processes are being terminated. High rates for ‘page locks’ may indicate that the disk drives are slow, while high rates for ‘record locks’ indicate contention for data or index structures.
Lock contention can be related to transaction duration. Shorter transactions reduce contention for records, but they may also reduce the effectiveness of database buffering, since every COMMIT and ROLLBACK requires that all buffers be flushed to disk. In high-contention environments, shorter transactions are preferable because they maximize concurrency. In low-contention environments, longer transactions are preferable because DBMS can optimize buffer utilization and write buffers to disk asynchronously. To reduce lock conflicts or deadlocks, optimize your database design, shorten transaction durations or disable Adjusted Locking Granularity.
In summary, DBO/SHOW STATISTICS is a useful tool for helping diagnose performance problems. In the next issue we will talk about additional ways you can use the tool to improve the performance of your system.