DBMS is often criticized for excessive locking – resulting in poor database performance when sharing data among multiple concurrent processes. Is this criticism justified, or is DBMS being unfairly blamed for application design and implementation shortfalls? To evaluate this question, we need to understand more about DBMS locking protocols. In this article, we examine how, why, what and when DBMS locks and unlocks database resources. Future articles will address how to minimize the impact of database locking.
THE NEED FOR LOCKING
In an ideal concurrent environment, many processes can simultaneously access data in a DBMS database, each having the appearance that they have exclusive access to the database. In practice, this environment is closely approximated by careful use of locking protocols.
Locking is necessary in a concurrent environment to ensure that one process does not retrieve or update a record that is being updated by another process. Without such controls, concurrent access would produce inconsistent and corrupt data.
In addition to record locking, DBMS implements several other locking mechanisms to ensure the integrity of other data structures that provide shared I/O, communication among different processes in a cluster, and automatic recovery in the event of a process or cluster failure. While these lock structures consume additional VMS lock resources, they rarely hinder database concurrency and can actually improve database performance.
How DBMS Uses Locks
DBMS makes extensive use of the VMS Distributed Lock Manager for controlling virtually every aspect of database access. Use of the Distributed Lock Manager ensures cluster-wide control of database resources, thus allowing DBMS to take advantage of OpenVMS' clustering technology.
VMS locks consume system resources. A typical process running a DBMS application may lock hundreds or thousands of records and database pages at a time. Using a VMS lock for each of these resources in a busy database could easily exhaust the system's lock tables. The SYSGEN parameters LOCKIDTBL, LOCKIDTBL_MAX, and REHASHTBL determine the number of locks that can exist on the system at any one time.
To minimize the number of VMS locks required to maintain record and page integrity, DBMS implements a technique called adjustable locking granularity. This allows DBMS to manage a group of resources (pages or records) using a single VMS lock. When a conflicting request is made for the same resource group, the process holding the lock is notified that it is blocking another process and automatically demotes the coarse lock on the larger group to finer-grained locks.
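The de-escalation behavior described above can be sketched as follows. This is a minimal illustration using hypothetical classes, not the actual DBMS implementation: one coarse lock covers a group of records, and a conflicting request (simulating the blocking AST notification) causes the holder to demote to per-record locks, freeing every record it never touched.

```python
# Sketch of adjustable locking granularity (hypothetical API, for
# illustration only): a single coarse lock stands in for many record
# locks until another process needs one of the covered records.

class GroupLock:
    def __init__(self, records):
        self.records = set(records)   # records covered by the coarse lock
        self.in_use = set()           # records the holder actually touched
        self.fine_locks = None        # None => still holding the coarse lock

    def touch(self, record):
        """Holder reads or updates a record under the coarse lock."""
        assert record in self.records
        self.in_use.add(record)

    def request(self, record):
        """Another process asks for a record in the same group.

        Simulates the blocking AST: the holder demotes the coarse lock
        to individual record locks, freeing every record it never used.
        Returns True if the request can now be granted.
        """
        if self.fine_locks is None:
            self.fine_locks = set(self.in_use)   # de-escalate
        return record not in self.fine_locks


lock = GroupLock(records=range(100))   # 1 VMS lock instead of 100
lock.touch(7)
lock.touch(42)

print(lock.request(7))    # False: holder still needs record 7
print(lock.request(13))   # True: record 13 was freed by de-escalation
```

The point of the technique is visible in the construction line: one hundred records initially cost one VMS lock, and finer locks are materialized only when contention actually occurs.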
Adjustable page locking is mandatory and hidden from the database administrator, while adjustable record locking can be enabled and tuned or disabled for each database. When adjustable record locking is enabled, DBMS attempts to minimize the number of VMS locks required to maintain database integrity without impacting database concurrency.
Types of Locks
DBMS employs many types of locks to ensure database integrity in a concurrent environment. By using various lock types for different functions, DBMS can provide optimal performance in many different environments.
DBMS uses area locks to implement the DML (Data Manipulation Language) READY statement. Once a realm has been readied by one run-unit, any subsequent READY of that realm by another run-unit must specify a usage mode compatible with all usage modes already in effect.
Area locks can significantly affect database concurrency – however, their impact is only felt during a DML READY statement. Lock conflicts for area locks occur only when you attempt to READY a realm. Once you successfully READY a realm, concurrent locking protocols (if required) are handled at the page and record level. Table I displays compatible area ready modes.
TABLE I – AREA READY MODE COMPATIBILITY TABLE
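Since the compatibility matrix is easiest to understand operationally, the sketch below shows how such a check behaves. The matrix here is simplified and illustrative only; it is not a reproduction of Table I, and the mode names and pairings should be verified against your DBMS documentation.

```python
# Illustrative READY usage-mode compatibility check (simplified matrix,
# NOT the actual contents of Table I). A new READY succeeds only if its
# usage mode is compatible with every mode already in effect for the realm.

COMPATIBLE = {
    # requested mode        : existing modes it can coexist with (assumed)
    "CONCURRENT RETRIEVAL": {"CONCURRENT RETRIEVAL", "CONCURRENT UPDATE",
                             "PROTECTED RETRIEVAL"},
    "CONCURRENT UPDATE":    {"CONCURRENT RETRIEVAL", "CONCURRENT UPDATE"},
    "PROTECTED RETRIEVAL":  {"CONCURRENT RETRIEVAL", "PROTECTED RETRIEVAL"},
    "PROTECTED UPDATE":     {"CONCURRENT RETRIEVAL"},
    "EXCLUSIVE UPDATE":     set(),   # tolerates no other run-units
}

def can_ready(requested, existing_modes):
    """True if the READY succeeds against all modes currently in effect."""
    return all(m in COMPATIBLE[requested] for m in existing_modes)


print(can_ready("CONCURRENT UPDATE", ["CONCURRENT RETRIEVAL"]))  # True
print(can_ready("PROTECTED UPDATE", ["CONCURRENT UPDATE"]))      # False
```

Note that the conflict, when it occurs, is raised at READY time; once the READY succeeds, contention moves down to the page and record level as described above.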
Page locks are used to manage the integrity of the page buffer pool. DBMS automatically resolves page lock conflicts by using the blocking AST features of the VMS lock manager. Thus, page locks are not typically a major impediment to database concurrency unless long DML verbs are frequently executed in your environment. DBMS uses adjustable locking to minimize the number of VMS locks required to maintain consistency of the buffer pool. A high rate of blocking ASTs indicates heavy contention for database pages in the buffer pool. Reducing the buffer length may help to reduce the overhead of page-level blocking ASTs.
Record locks are typically the largest source of lock conflicts in a DBMS environment. Record locks are used to manage the integrity of your data, and to implement the "adjustable record locking granularity" feature of DBMS. Adjustable locking is the default for record locks, but can be tuned or disabled by the DBA.
Quiet Point Locks
Quiet point locks are used to control online database and after-image journal backup operations. Large quiet point lock stall times indicate that processes are waiting for online backups to begin, or for the primary after-image journal file to be written to secondary storage. To minimize the duration of quiet point stalls, it is important that all concurrent database processes (except batch retrieval transactions) periodically execute commits (or COMMIT RETAINING). Even "concurrent retrieval" transactions should periodically COMMIT [RETAINING] their transactions. This ensures that online backups will achieve a "quiet point" quickly and allow new transactions to proceed.
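The periodic-commit pattern recommended above can be sketched as a simple batching loop. The `db` handle and its methods here are hypothetical stand-ins for illustration; the idea is that committing every N records bounds how long any one transaction can delay a backup's quiet point.

```python
# Sketch of the periodic COMMIT RETAINING pattern (hypothetical `db`
# API). Each intermediate commit releases locks and lets an online
# backup reach its quiet point without waiting for the whole batch.

COMMIT_INTERVAL = 100   # records processed between COMMIT RETAINING calls

def process_batch(db, records):
    db.start_transaction()
    for i, record in enumerate(records, start=1):
        db.update(record)
        if i % COMMIT_INTERVAL == 0:
            db.commit_retaining()   # release locks, keep run-unit context
    db.commit()                     # final commit ends the transaction


class FakeDB:
    """Stand-in for a real database handle, used only to demo the pattern."""
    def __init__(self):
        self.retains = 0
    def start_transaction(self): pass
    def update(self, record): pass
    def commit_retaining(self): self.retains += 1
    def commit(self): pass


db = FakeDB()
process_batch(db, range(250))
print(db.retains)   # 2 intermediate commits, at records 100 and 200
```

The right interval is a trade-off: smaller values shorten quiet-point stalls but add commit overhead and shrink the unit of work that can be rolled back atomically.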
Freeze Locks
Freeze locks are used to stop (freeze) database activity during database process recovery. When a process terminates abnormally (as a result of a process or node failure, a STOP/ID, or a CTRL-Y/STOP), all locks held by that process are automatically released. If other transactions were allowed to continue against the partially updated database, corruption would result. Thus, when a process terminates abnormally, DBMS uses the freeze lock to halt database activity until the failed process(es) can be recovered. Freeze locks typically are not a major source of contention in most environments. However, if you are subject to frequent system or process failures, or users use CTRL-Y/STOP to exit from programs, freeze locks could hinder database concurrency.
Several of the DBMS creation and modification qualifiers have a direct impact on database locking characteristics. Establishing the appropriate mix of qualifiers in your environment can help minimize the impact of database locking.
The [no]hold_retrieval_locks qualifier determines whether DBMS holds read-only record locks on all records read for the duration of the transaction (until the next COMMIT [without the RETAINING option] or ROLLBACK). Holding retrieval locks guarantees that no record read during a transaction can be changed by another run-unit before the transaction ends. While this increases the consistency of your transaction, it can significantly degrade concurrency. This option should be used only if your transactions read very few records and the consistency of all records read must be guaranteed throughout the transaction. By default, DBMS uses /NOHOLD_RETRIEVAL_LOCKS. The logical name DBM$BIND_HOLD_RETRIEVAL_LOCKS may be used to override the default established in the root file; if it translates to "1", all records read by the transaction are locked until the end of the transaction. Software Concepts International recommends against using hold retrieval locks in most environments.
The [no]wait_record_locks qualifier determines whether a run-unit waits when requesting a record that is locked in a conflicting mode by another run-unit, or instead receives a "lock conflict" exception. This qualifier only determines whether the requesting run-unit receives a "lock conflict" exception – not a "deadlock" exception (deadlock exceptions are always returned when they occur). With the default (WAIT_RECORD_LOCKS), DBMS does not generate a "lock conflict" exception; the blocked process simply waits, indefinitely if necessary, until the record is unlocked by the other run-unit.
The logical name DBM$BIND_WAIT_RECORD_LOCKS may be used to override the default established in the root file. Again, a value of "1" enables waiting on record lock conflicts, and a value of "0" causes the process to receive the "lock conflict" exception. Software Concepts International recommends that clients WAIT on record conflicts. This allows the application to trap "deadlock" exceptions, and avoids "live-lock" situations that cannot be detected. In addition, waiting on record conflicts can be combined with the /TIMEOUT qualifier to give the application control over records locked for an excessive duration.
The timeout qualifier allows you to specify the amount of time that a run-unit waits for a locked record before returning a "lock timeout" exception. This qualifier must be used with "wait" on record locks (above). The logical name DBM$BIND_LOCK_TIMEOUT_INTERVAL may be used to override the default established in the root file; its translation determines the number of seconds to wait for a locked record. If your applications trap "DBM$_TIMEOUT" exceptions, then Software Concepts International recommends using lock timeouts with a time of at least 60 seconds. Use the /TIMEOUT qualifier only if your application is designed to handle "lock timeout" exceptions. COBOL shops that use declaratives may want to handle the "DBM$_DEADLOCK", "DBM$_LCKCNFLCT", and "DBM$_TIMEOUT" exceptions in the same "USE" section.
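The handle-all-three-in-one-place approach suggested for COBOL USE sections can be sketched in any language with structured exception handling. Here is a hedged Python sketch: the exception classes and the `run_transaction`/`rollback` callables are hypothetical, and a real handler would live in your application's transaction wrapper.

```python
# Sketch of a unified handler for the three lock exceptions (hypothetical
# exception classes standing in for the DBMS condition codes). On a lock
# exception the run-unit rolls back -- releasing its locks -- and retries.

class DBMDeadlock(Exception): pass       # stands in for DBM$_DEADLOCK
class DBMLockConflict(Exception): pass   # stands in for DBM$_LCKCNFLCT
class DBMTimeout(Exception): pass        # stands in for DBM$_TIMEOUT

MAX_RETRIES = 3

def with_lock_retry(run_transaction, rollback):
    """Run a transaction, rolling back and retrying on lock exceptions."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return run_transaction()
        except (DBMDeadlock, DBMLockConflict, DBMTimeout):
            rollback()                   # a deadlock victim must release its locks
            if attempt == MAX_RETRIES:
                raise                    # give up after the final attempt


attempts = []
def flaky():
    """Demo transaction that deadlocks twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise DBMDeadlock()
    return "committed"

print(with_lock_retry(flaky, rollback=lambda: None))   # committed on 3rd try
```

Retrying the whole transaction (rather than just the failing verb) matters: after a rollback, any records read earlier in the transaction may have changed, so the work must be redone from the start.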
Enabling, disabling, or modifying the values of the adjustable locking features of DBMS will not significantly reduce record lock conflicts. However, adjustable locking can significantly affect the amount of lock resources your application uses, as well as the overall overhead associated with record locking.
The DBO/SHOW STATISTICS (record locking) screen provides useful insights into the potential benefits and costs of adjustable locking. If you observe a blocking AST rate that is more than 20-25% of the number of locks requested plus locks promoted, then this may indicate significant adjustable locking overhead. In this case, try disabling adjustable locking, or reducing the number of levels in its tree.
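The rule of thumb above reduces to a simple ratio. The helper below expresses it directly; the argument names follow the text, not an actual DBO/SHOW STATISTICS record layout, and the 20% threshold is the low end of the range quoted above.

```python
# Rule-of-thumb check from the text: blocking ASTs exceeding roughly
# 20-25% of (locks requested + locks promoted) suggests significant
# adjustable-locking overhead. Field names are illustrative only.

def adjustable_locking_suspect(blocking_asts, locks_requested,
                               locks_promoted, threshold=0.20):
    """Return True if the blocking AST rate crosses the threshold."""
    total = locks_requested + locks_promoted
    if total == 0:
        return False          # no lock activity, nothing to flag
    return blocking_asts / total > threshold


print(adjustable_locking_suspect(300, 1000, 200))   # True: 300/1200 = 25%
print(adjustable_locking_suspect(50, 1000, 200))    # False: roughly 4%
```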
Lock optimization sounds so obvious. Who wouldn't want "lock optimization?" Lock optimization (the default) only controls whether area locks are held from one transaction to another. This avoids the overhead of acquiring and releasing locks for each transaction.
In environments where long DML verbs are frequently executed, lock optimization may actually degrade performance. This is because the process holding the lock does not release the NOWAIT lock until the end of its current DML verb. Thus, if the current DML verb takes a long time to complete, the process trying to ready the realm may experience a long delay.
Snapshots are included in this discussion of locking, because the use of snapshots (batch retrieval transactions) can significantly reduce the level of lock contention in your database. Although snapshot transactions are subject to page and other resource lock conflicts, they are never involved in record lock conflicts – thus providing significantly increased concurrency between read-only and update transactions.
Enabling snapshots is not, however, a panacea – all update processes (except those in EXCLUSIVE or BATCH mode) must write before-images of their updates to the snapshot files. Using the /DEFERRED qualifier minimizes this effect by allowing update processes to write to the snapshot file only when snapshot transactions are active.
Additional or excessive buffers require additional page-level locking to manage the buffer pool. If you are using large buffer counts, you may need to increase the enqueue limits on your processes, as well as the SYSGEN parameters LOCKIDTBL, LOCKIDTBL_MAX, and REHASHTBL.
DBMS Lock Exceptions
DBMS signals one of three types of exceptions when a process encounters a locked record – a deadlock, a lock conflict or a lock timeout.
A deadlock exception, DBM$_DEADLOCK, is returned when two run-units attempt to access resources in mutually exclusive modes, and each run-unit is waiting for a resource that the other holds. Neither run-unit can continue unless one of them releases its locks. When a deadlock occurs, DBMS chooses a "victim" and signals the deadlock condition to that run-unit. This does not cause the victim to automatically release its locks; the victim process should immediately execute a ROLLBACK to release them.
Lock Conflict Exceptions
DBMS returns the lock conflict exception, DBM$_LCKCNFLCT, only when the run-unit is bound to a database with "/NOWAIT_RECORD_LOCKS" enabled and it attempts to access a record that is locked in a mutually exclusive mode by another run-unit. Note that only the "blocked" run-unit receives the exception.
Lock Timeout Exceptions
The third type of exception is the lock timeout exception, DBM$_TIMEOUT. A lock timeout occurs only when "/TIMEOUT=LOCK=nnn" and "/WAIT_RECORD_LOCKS" are both in effect and a run-unit waits longer than the timeout interval for a record that is locked in a mutually exclusive mode by another run-unit.
Handling Lock Conflicts
In DBMS Locking, Part II, we will discuss options for handling and minimizing DBMS lock conflicts.