Monitoring
1. Monitoring Database Activity
1.1. Standard Unix Tools
On most Unix platforms, IvorySQL modifies its command title as reported by ps, so that individual server processes can readily be identified. A sample display is
$ ps auxww | grep ^postgres
postgres 15551 0.0 0.1 57536 7132 pts/0 S 18:02 0:00 postgres -i
postgres 15554 0.0 0.0 57536 1184 ? Ss 18:02 0:00 postgres: background writer
postgres 15555 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: checkpointer
postgres 15556 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: walwriter
postgres 15557 0.0 0.0 58504 2244 ? Ss 18:02 0:00 postgres: autovacuum launcher
postgres 15558 0.0 0.0 17512 1068 ? Ss 18:02 0:00 postgres: stats collector
postgres 15582 0.0 0.0 58772 3080 ? Ss 18:04 0:00 postgres: joe runbug 127.0.0.1 idle
postgres 15606 0.0 0.0 58772 3052 ? Ss 18:07 0:00 postgres: tgl regression [local] SELECT waiting
postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl regression [local] idle in transaction
(The appropriate invocation of ps varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the primary server process. The command arguments shown for it are the same ones used when it was launched. The next five processes are background worker processes automatically launched by the primary process. (The “autovacuum launcher” process will not be present if you have set the system not to run autovacuum.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form
postgres: user database host activity
The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. The activity can be idle (i.e., waiting for a client command), idle in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT. Also, waiting is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. (Process 15610 must be the blocker, because there is no other active session. In more complicated cases it would be necessary to look into the pg_locks system view to determine who is blocking whom.)
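In such cases a query against pg_stat_activity can make the blocking relationship explicit. A minimal sketch (it assumes the PostgreSQL-heritage helper function pg_blocking_pids(), which returns the set of process IDs blocking the given backend):
SELECT pid, usename, state, wait_event_type, wait_event,
       pg_blocking_pids(pid) AS blocked_by, query
  FROM pg_stat_activity
 WHERE cardinality(pg_blocking_pids(pid)) > 0;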
If cluster_name has been configured the cluster name will also be shown in ps output:
$ psql -c 'SHOW cluster_name'
cluster_name
--------------
server1
(1 row)
$ ps aux|grep server1
postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: server1: background writer
...
If you have turned off update_process_title then the activity indicator is not updated; the process title is set only once when a new process is launched. On some platforms this saves a measurable amount of per-command overhead; on others it’s insignificant.
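For example, to inspect the current setting and disable title updates server-wide (a sketch; ALTER SYSTEM requires superuser privileges and the change takes effect after a configuration reload):
SHOW update_process_title;
ALTER SYSTEM SET update_process_title = off;
SELECT pg_reload_conf();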
1.2. The Cumulative Statistics System
IvorySQL’s cumulative statistics system supports collection and reporting of information about server activity. Presently, accesses to tables and indexes in both disk-block and individual-row terms are counted. The total number of rows in each table, and information about vacuum and analyze actions for each table are also counted. If enabled, calls to user-defined functions and the total time spent in each one are counted as well.
IvorySQL also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the cumulative statistics system.
1.2.1. Statistics Collection Configuration
Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in postgresql.conf.
The parameter track_activities enables monitoring of the current command being executed by any server process.
The parameter track_counts controls whether cumulative statistics are collected about table and index accesses.
The parameter track_functions enables tracking of usage of user-defined functions.
The parameter track_io_timing enables monitoring of block read and write times.
The parameter track_wal_io_timing enables monitoring of WAL write times.
Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the SET command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET.)
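For example, a superuser could enable the optional collectors for just the current session (a sketch; parameter names as described above):
SET track_io_timing = on;       -- collect block read and write times
SET track_wal_io_timing = on;   -- collect WAL write times
SET track_functions = 'all';    -- track SQL and procedural-language functions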
Cumulative statistics are collected in shared memory. Every IvorySQL process collects statistics locally, then updates the shared data at appropriate intervals. When a server, including a physical replica, shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that statistics can be retained across server restarts. In contrast, when starting from an unclean shutdown (e.g., after an immediate shutdown, a server crash, starting from a base backup, or point-in-time recovery), all statistics counters are reset.
1.2.2. Viewing Statistics
Several predefined views, listed in Table 1, are available to show the current state of the system. There are also several other views, listed in Table 2, available to show the accumulated statistics. Alternatively, one can build custom views using the underlying cumulative statistics functions.
When using the cumulative statistics views and functions to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process flushes out accumulated statistics to shared memory just before going idle, but not more frequently than once per PGSTAT_MIN_INTERVAL milliseconds (1 second unless altered while building the server); so a query or transaction still in progress does not affect the displayed totals and the displayed information lags behind actual activity. However, current-query information collected by track_activities is always up-to-date.
Another important point is that when a server process is asked to display any of the accumulated statistics, accessed values are cached until the end of its current transaction in the default configuration. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you. When analyzing statistics interactively, or with expensive queries, the time delta between accesses to individual statistics can lead to significant skew in the cached statistics. To minimize skew, stats_fetch_consistency can be set to snapshot, at the price of increased memory usage for caching not-needed statistics data. Conversely, if it’s known that statistics are only accessed once, caching accessed statistics is unnecessary and can be avoided by setting stats_fetch_consistency to none. You can invoke pg_stat_clear_snapshot() to discard the current transaction’s statistics snapshot or cached values (if any). The next use of statistical information will (when in snapshot mode) cause a new snapshot to be built or (when in cache mode) accessed statistics to be cached.
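The following sketch illustrates the snapshot behavior just described (the table name orders is a hypothetical placeholder):
BEGIN;
SET LOCAL stats_fetch_consistency = 'snapshot';
SELECT n_live_tup FROM pg_stat_user_tables WHERE relname = 'orders';
-- Repeated statistics reads within this transaction see the same snapshot.
SELECT pg_stat_clear_snapshot();  -- discard it; the next access builds a new one
COMMIT;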
A transaction can also see its own statistics (not yet flushed out to the shared memory statistics) in the views pg_stat_xact_all_tables, pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction.
Some of the information in the dynamic statistics views shown in Table 1 is security restricted. Ordinary users can only see all the information about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and roles with privileges of the built-in role pg_read_all_stats can see all the information about all sessions.
Table 1. Dynamic Statistics Views
View Name | Description
pg_stat_activity | One row per server process, showing information related to the current activity of that process, such as state and current query. See pg_stat_activity below.
pg_stat_replication | One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See pg_stat_replication below.
pg_stat_wal_receiver | Only one row, showing statistics about the WAL receiver from that receiver's connected server. See pg_stat_wal_receiver below.
pg_stat_recovery_prefetch | Only one row, showing statistics about blocks prefetched during recovery. See pg_stat_recovery_prefetch below.
pg_stat_subscription | At least one row per subscription, showing information about the subscription workers. See pg_stat_subscription below.
pg_stat_ssl | One row per connection (regular and replication), showing information about SSL used on this connection. See pg_stat_ssl below.
pg_stat_gssapi | One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection. See pg_stat_gssapi below.
pg_stat_progress_analyze | One row for each backend (including autovacuum worker processes) running ANALYZE, showing current progress.
pg_stat_progress_create_index | One row for each backend running CREATE INDEX or REINDEX, showing current progress.
pg_stat_progress_vacuum | One row for each backend (including autovacuum worker processes) running VACUUM, showing current progress.
pg_stat_progress_cluster | One row for each backend running CLUSTER or VACUUM FULL, showing current progress.
pg_stat_progress_basebackup | One row for each WAL sender process streaming a base backup, showing current progress.
pg_stat_progress_copy | One row for each backend running COPY, showing current progress.
Table 2. Collected Statistics Views
View Name | Description
pg_stat_archiver | One row only, showing statistics about the WAL archiver process's activity. See pg_stat_archiver below.
pg_stat_bgwriter | One row only, showing statistics about the background writer process's activity. See pg_stat_bgwriter below.
pg_stat_wal | One row only, showing statistics about WAL activity. See pg_stat_wal below.
pg_stat_database | One row per database, showing database-wide statistics. See pg_stat_database below.
pg_stat_database_conflicts | One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. See pg_stat_database_conflicts below.
pg_stat_all_tables | One row for each table in the current database, showing statistics about accesses to that specific table. See pg_stat_all_tables below.
pg_stat_sys_tables | Same as pg_stat_all_tables, except that only system tables are shown.
pg_stat_user_tables | Same as pg_stat_all_tables, except that only user tables are shown.
pg_stat_xact_all_tables | Similar to pg_stat_all_tables, but counts actions taken so far within the current transaction (which are not yet included in pg_stat_all_tables and related views).
pg_stat_xact_sys_tables | Same as pg_stat_xact_all_tables, except that only system tables are shown.
pg_stat_xact_user_tables | Same as pg_stat_xact_all_tables, except that only user tables are shown.
pg_stat_all_indexes | One row for each index in the current database, showing statistics about accesses to that specific index. See pg_stat_all_indexes below.
pg_stat_sys_indexes | Same as pg_stat_all_indexes, except that only indexes on system tables are shown.
pg_stat_user_indexes | Same as pg_stat_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_tables | One row for each table in the current database, showing statistics about I/O on that specific table. See pg_statio_all_tables below.
pg_statio_sys_tables | Same as pg_statio_all_tables, except that only system tables are shown.
pg_statio_user_tables | Same as pg_statio_all_tables, except that only user tables are shown.
pg_statio_all_indexes | One row for each index in the current database, showing statistics about I/O on that specific index. See pg_statio_all_indexes below.
pg_statio_sys_indexes | Same as pg_statio_all_indexes, except that only indexes on system tables are shown.
pg_statio_user_indexes | Same as pg_statio_all_indexes, except that only indexes on user tables are shown.
pg_statio_all_sequences | One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See pg_statio_all_sequences below.
pg_statio_sys_sequences | Same as pg_statio_all_sequences, except that only system sequences are shown.
pg_statio_user_sequences | Same as pg_statio_all_sequences, except that only user sequences are shown.
pg_stat_user_functions | One row for each tracked function, showing statistics about executions of that function. See pg_stat_user_functions below.
pg_stat_xact_user_functions | Similar to pg_stat_user_functions, but counts only calls during the current transaction (which are not yet included in pg_stat_user_functions).
pg_stat_slru | One row per SLRU, showing statistics of operations. See pg_stat_slru below.
pg_stat_replication_slots | One row per replication slot, showing statistics about the replication slot's usage. See pg_stat_replication_slots below.
pg_stat_subscription_stats | One row per subscription, showing statistics about errors. See pg_stat_subscription_stats below.
The per-index statistics are particularly useful to determine which indexes are being used and how effective they are.
The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the entire story: due to the way in which IvorySQL handles disk I/O, data that is not in the IvorySQL buffer cache might still reside in the kernel’s I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more detailed information on IvorySQL I/O behavior are advised to use the IvorySQL statistics views in combination with operating system utilities that allow insight into the kernel’s handling of I/O.
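For example, a rough per-table cache hit ratio can be computed from pg_statio_user_tables (a sketch; interpret the result with the kernel-cache caveat above in mind):
SELECT relname, heap_blks_read, heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 4) AS hit_ratio
  FROM pg_statio_user_tables
 ORDER BY heap_blks_hit + heap_blks_read DESC
 LIMIT 10;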
1.2.3. pg_stat_activity
The pg_stat_activity view will have one row per server process, showing information related to the current activity of that process.
Column | Type | Description
datid | oid | OID of the database this backend is connected to
datname | name | Name of the database this backend is connected to
pid | integer | Process ID of this backend
leader_pid | integer | Process ID of the parallel group leader, if this process is a parallel query worker.
usesysid | oid | OID of the user logged into this backend
usename | name | Name of the user logged into this backend
application_name | text | Name of the application that is connected to this backend
client_addr | inet | IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum.
client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
client_port | integer | TCP port number that the client is using for communication with this backend, or -1 if a Unix socket is used
backend_start | timestamp with time zone | Time when this process was started. For client backends, this is the time the client connected to the server.
xact_start | timestamp with time zone | Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start column.
query_start | timestamp with time zone | Time when the currently active query was started, or if state is not active, when the last query was started
state_change | timestamp with time zone | Time when the state was last changed
wait_event_type | text | The type of event for which the backend is waiting, if any; otherwise NULL.
wait_event | text | Wait event name if backend is currently waiting, otherwise NULL.
state | text | Current overall state of this backend. Possible values are: active, idle, idle in transaction, idle in transaction (aborted), fastpath function call, disabled
backend_xid | xid | Top-level transaction identifier of this backend, if any.
backend_xmin | xid | The current backend's xmin horizon.
query_id | bigint | Identifier of this backend's most recent query. If state is active, this field shows the identifier of the currently executing query. In all other states, it shows the identifier of the last query that was executed.
query | text | Text of this backend's most recent query. If state is active, this field shows the currently executing query. In all other states, it shows the last query that was executed.
backend_type | text | Type of current backend. Possible types are autovacuum launcher, autovacuum worker, logical replication launcher, logical replication worker, parallel worker, background writer, client backend, checkpointer, archiver, startup, walreceiver, walsender and walwriter.
Wait Event Type | Description
Activity | The server process is idle. This event type indicates a process waiting for activity in its main processing loop.
BufferPin | The server process is waiting for exclusive access to a data buffer. Buffer pin waits can be protracted if another process holds an open cursor that last read data from the buffer in question.
Client | The server process is waiting for activity on a socket connected to a user application. Thus, the server expects something to happen that is independent of its internal processes.
Extension | The server process is waiting for some condition defined by an extension module.
IO | The server process is waiting for an I/O operation to complete.
IPC | The server process is waiting for some interaction with another server process.
Lock | The server process is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension.
LWLock | The server process is waiting for a lightweight lock. Most such locks protect a particular data structure in shared memory.
Timeout | The server process is waiting for a timeout to expire.
Wait Events of Type Activity
Wait Event | Description
ArchiverMain | Waiting in main loop of archiver process.
AutoVacuumMain | Waiting in main loop of autovacuum launcher process.
BgWriterHibernate | Waiting in background writer process, hibernating.
BgWriterMain | Waiting in main loop of background writer process.
CheckpointerMain | Waiting in main loop of checkpointer process.
LogicalApplyMain | Waiting in main loop of logical replication apply process.
LogicalLauncherMain | Waiting in main loop of logical replication launcher process.
RecoveryWalStream | Waiting in main loop of startup process for WAL to arrive, during streaming recovery.
SysLoggerMain | Waiting in main loop of syslogger process.
WalReceiverMain | Waiting in main loop of WAL receiver process.
WalSenderMain | Waiting in main loop of WAL sender process.
WalWriterMain | Waiting in main loop of WAL writer process.
Wait Events of Type BufferPin
Wait Event | Description
BufferPin | Waiting to acquire an exclusive pin on a buffer.
Wait Events of Type Client
Wait Event | Description
ClientRead | Waiting to read data from the client.
ClientWrite | Waiting to write data to the client.
GSSOpenServer | Waiting to read data from the client while establishing a GSSAPI session.
LibPQWalReceiverConnect | Waiting in WAL receiver to establish connection to remote server.
LibPQWalReceiverReceive | Waiting in WAL receiver to receive data from remote server.
SSLOpenServer | Waiting for SSL while attempting connection.
WalSenderWaitForWAL | Waiting for WAL to be flushed in WAL sender process.
WalSenderWriteData | Waiting for any activity when processing replies from WAL receiver in WAL sender process.
Wait Events of Type Extension
Wait Event | Description
Extension | Waiting in an extension.
Wait Events of Type IO
Wait Event | Description
BaseBackupRead | Waiting for base backup to read from a file.
BufFileRead | Waiting for a read from a buffered file.
BufFileWrite | Waiting for a write to a buffered file.
BufFileTruncate | Waiting for a buffered file to be truncated.
ControlFileRead | Waiting for a read from the pg_control file.
ControlFileSync | Waiting for the pg_control file to reach durable storage.
ControlFileSyncUpdate | Waiting for an update to the pg_control file to reach durable storage.
ControlFileWrite | Waiting for a write to the pg_control file.
ControlFileWriteUpdate | Waiting for a write to update the pg_control file.
CopyFileRead | Waiting for a read during a file copy operation.
CopyFileWrite | Waiting for a write during a file copy operation.
DSMFillZeroWrite | Waiting to fill a dynamic shared memory backing file with zeroes.
DataFileExtend | Waiting for a relation data file to be extended.
DataFileFlush | Waiting for a relation data file to reach durable storage.
DataFileImmediateSync | Waiting for an immediate synchronization of a relation data file to durable storage.
DataFilePrefetch | Waiting for an asynchronous prefetch from a relation data file.
DataFileRead | Waiting for a read from a relation data file.
DataFileSync | Waiting for changes to a relation data file to reach durable storage.
DataFileTruncate | Waiting for a relation data file to be truncated.
DataFileWrite | Waiting for a write to a relation data file.
LockFileAddToDataDirRead | Waiting for a read while adding a line to the data directory lock file.
LockFileAddToDataDirSync | Waiting for data to reach durable storage while adding a line to the data directory lock file.
LockFileAddToDataDirWrite | Waiting for a write while adding a line to the data directory lock file.
LockFileCreateRead | Waiting to read while creating the data directory lock file.
LockFileCreateSync | Waiting for data to reach durable storage while creating the data directory lock file.
LockFileCreateWrite | Waiting for a write while creating the data directory lock file.
LockFileReCheckDataDirRead | Waiting for a read during recheck of the data directory lock file.
LogicalRewriteCheckpointSync | Waiting for logical rewrite mappings to reach durable storage during a checkpoint.
LogicalRewriteMappingSync | Waiting for mapping data to reach durable storage during a logical rewrite.
LogicalRewriteMappingWrite | Waiting for a write of mapping data during a logical rewrite.
LogicalRewriteSync | Waiting for logical rewrite mappings to reach durable storage.
LogicalRewriteTruncate | Waiting for truncate of mapping data during a logical rewrite.
LogicalRewriteWrite | Waiting for a write of logical rewrite mappings.
RelationMapRead | Waiting for a read of the relation map file.
RelationMapSync | Waiting for the relation map file to reach durable storage.
RelationMapWrite | Waiting for a write to the relation map file.
ReorderBufferRead | Waiting for a read during reorder buffer management.
ReorderBufferWrite | Waiting for a write during reorder buffer management.
ReorderLogicalMappingRead | Waiting for a read of a logical mapping during reorder buffer management.
ReplicationSlotRead | Waiting for a read from a replication slot control file.
ReplicationSlotRestoreSync | Waiting for a replication slot control file to reach durable storage while restoring it to memory.
ReplicationSlotSync | Waiting for a replication slot control file to reach durable storage.
ReplicationSlotWrite | Waiting for a write to a replication slot control file.
SLRUFlushSync | Waiting for SLRU data to reach durable storage during a checkpoint or database shutdown.
SLRURead | Waiting for a read of an SLRU page.
SLRUSync | Waiting for SLRU data to reach durable storage following a page write.
SLRUWrite | Waiting for a write of an SLRU page.
SnapbuildRead | Waiting for a read of a serialized historical catalog snapshot.
SnapbuildSync | Waiting for a serialized historical catalog snapshot to reach durable storage.
SnapbuildWrite | Waiting for a write of a serialized historical catalog snapshot.
TimelineHistoryFileSync | Waiting for a timeline history file received via streaming replication to reach durable storage.
TimelineHistoryFileWrite | Waiting for a write of a timeline history file received via streaming replication.
TimelineHistoryRead | Waiting for a read of a timeline history file.
TimelineHistorySync | Waiting for a newly created timeline history file to reach durable storage.
TimelineHistoryWrite | Waiting for a write of a newly created timeline history file.
TwophaseFileRead | Waiting for a read of a two phase state file.
TwophaseFileSync | Waiting for a two phase state file to reach durable storage.
TwophaseFileWrite | Waiting for a write of a two phase state file.
VersionFileWrite | Waiting for the version file to be written while creating a database.
WALBootstrapSync | Waiting for WAL to reach durable storage during bootstrapping.
WALBootstrapWrite | Waiting for a write of a WAL page during bootstrapping.
WALCopyRead | Waiting for a read when creating a new WAL segment by copying an existing one.
WALCopySync | Waiting for a new WAL segment created by copying an existing one to reach durable storage.
WALCopyWrite | Waiting for a write when creating a new WAL segment by copying an existing one.
WALInitSync | Waiting for a newly initialized WAL file to reach durable storage.
WALInitWrite | Waiting for a write while initializing a new WAL file.
WALRead | Waiting for a read from a WAL file.
WALSenderTimelineHistoryRead | Waiting for a read from a timeline history file during a walsender timeline command.
WALSync | Waiting for a WAL file to reach durable storage.
WALSyncMethodAssign | Waiting for data to reach durable storage while assigning a new WAL sync method.
WALWrite | Waiting for a write to a WAL file.
Wait Events of Type IPC
Wait Event | Description
AppendReady | Waiting for subplan nodes of an Append plan node to be ready.
ArchiveCleanupCommand | Waiting for archive_cleanup_command to complete.
ArchiveCommand | Waiting for archive_command to complete.
BackendTermination | Waiting for the termination of another backend.
BackupWaitWalArchive | Waiting for WAL files required for a backup to be successfully archived.
BgWorkerShutdown | Waiting for background worker to shut down.
BgWorkerStartup | Waiting for background worker to start up.
BtreePage | Waiting for the page number needed to continue a parallel B-tree scan to become available.
BufferIO | Waiting for buffer I/O to complete.
CheckpointDone | Waiting for a checkpoint to complete.
CheckpointStart | Waiting for a checkpoint to start.
ExecuteGather | Waiting for activity from a child process while executing a Gather or Gather Merge plan node.
HashBatchAllocate | Waiting for an elected Parallel Hash participant to allocate a hash table.
HashBatchElect | Waiting to elect a Parallel Hash participant to allocate a hash table.
HashBatchLoad | Waiting for other Parallel Hash participants to finish loading a hash table.
HashBuildAllocate | Waiting for an elected Parallel Hash participant to allocate the initial hash table.
HashBuildElect | Waiting to elect a Parallel Hash participant to allocate the initial hash table.
HashBuildHashInner | Waiting for other Parallel Hash participants to finish hashing the inner relation.
HashBuildHashOuter | Waiting for other Parallel Hash participants to finish partitioning the outer relation.
HashGrowBatchesAllocate | Waiting for an elected Parallel Hash participant to allocate more batches.
HashGrowBatchesDecide | Waiting to elect a Parallel Hash participant to decide on future batch growth.
HashGrowBatchesElect | Waiting to elect a Parallel Hash participant to allocate more batches.
HashGrowBatchesFinish | Waiting for an elected Parallel Hash participant to decide on future batch growth.
HashGrowBatchesRepartition | Waiting for other Parallel Hash participants to finish repartitioning.
HashGrowBucketsAllocate | Waiting for an elected Parallel Hash participant to finish allocating more buckets.
HashGrowBucketsElect | Waiting to elect a Parallel Hash participant to allocate more buckets.
HashGrowBucketsReinsert | Waiting for other Parallel Hash participants to finish inserting tuples into new buckets.
LogicalSyncData | Waiting for a logical replication remote server to send data for initial table synchronization.
LogicalSyncStateChange | Waiting for a logical replication remote server to change state.
MessageQueueInternal | Waiting for another process to be attached to a shared message queue.
MessageQueuePutMessage | Waiting to write a protocol message to a shared message queue.
MessageQueueReceive | Waiting to receive bytes from a shared message queue.
MessageQueueSend | Waiting to send bytes to a shared message queue.
ParallelBitmapScan | Waiting for parallel bitmap scan to become initialized.
ParallelCreateIndexScan | Waiting for parallel CREATE INDEX workers to finish.
ParallelFinish | Waiting for parallel workers to finish computing.
ProcArrayGroupUpdate | Waiting for the group leader to clear the transaction ID at end of a parallel operation.
ProcSignalBarrier | Waiting for a barrier event to be processed by all backends.
Promote | Waiting for standby promotion.
RecoveryConflictSnapshot | Waiting for recovery conflict resolution for a vacuum cleanup.
RecoveryConflictTablespace | Waiting for recovery conflict resolution for dropping a tablespace.
RecoveryEndCommand | Waiting for recovery_end_command to complete.
RecoveryPause | Waiting for recovery to be resumed.
ReplicationOriginDrop | Waiting for a replication origin to become inactive so it can be dropped.
ReplicationSlotDrop | Waiting for a replication slot to become inactive so it can be dropped.
RestoreCommand | Waiting for restore_command to complete.
SafeSnapshot | Waiting to obtain a valid snapshot for a READ ONLY DEFERRABLE transaction.
SyncRep | Waiting for confirmation from a remote server during synchronous replication.
WalReceiverExit | Waiting for the WAL receiver to exit.
WalReceiverWaitStart | Waiting for startup process to send initial data for streaming replication.
XactGroupUpdate | Waiting for the group leader to update transaction status at end of a parallel operation.
Wait Events of Type Lock
Wait Event | Description
advisory | Waiting to acquire an advisory user lock.
extend | Waiting to extend a relation.
frozenid | Waiting to update pg_database.datfrozenxid and pg_database.datminmxid.
object | Waiting to acquire a lock on a non-relation database object.
page | Waiting to acquire a lock on a page of a relation.
relation | Waiting to acquire a lock on a relation.
spectoken | Waiting to acquire a speculative insertion lock.
transactionid | Waiting for a transaction to finish.
tuple | Waiting to acquire a lock on a tuple.
userlock | Waiting to acquire a user lock.
virtualxid | Waiting to acquire a virtual transaction ID lock.
Wait Events of Type LWLock
Wait Event | Description
AddinShmemInit | Waiting to manage an extension's space allocation in shared memory.
AutoFile | Waiting to update the postgresql.auto.conf file.
Autovacuum | Waiting to read or update the current state of autovacuum workers.
AutovacuumSchedule | Waiting to ensure that a table selected for autovacuum still needs vacuuming.
BackgroundWorker | Waiting to read or update background worker state.
BtreeVacuum | Waiting to read or update vacuum-related information for a B-tree index.
BufferContent | Waiting to access a data page in memory.
BufferMapping | Waiting to associate a data block with a buffer in the buffer pool.
CheckpointerComm | Waiting to manage fsync requests.
CommitTs | Waiting to read or update the last value set for a transaction commit timestamp.
CommitTsBuffer | Waiting for I/O on a commit timestamp SLRU buffer.
CommitTsSLRU | Waiting to access the commit timestamp SLRU cache.
ControlFile | Waiting to read or update the pg_control file.
DynamicSharedMemoryControl | Waiting to read or update dynamic shared memory allocation information.
LockFastPath | Waiting to read or update a process' fast-path lock information.
LockManager | Waiting to read or update information about “heavyweight” locks.
LogicalRepWorker | Waiting to read or update the state of logical replication workers.
MultiXactGen | Waiting to read or update shared multixact state.
MultiXactMemberBuffer | Waiting for I/O on a multixact member SLRU buffer.
MultiXactMemberSLRU | Waiting to access the multixact member SLRU cache.
MultiXactOffsetBuffer | Waiting for I/O on a multixact offset SLRU buffer.
MultiXactOffsetSLRU | Waiting to access the multixact offset SLRU cache.
MultiXactTruncation | Waiting to read or truncate multixact information.
NotifyBuffer | Waiting for I/O on a NOTIFY message SLRU buffer.
NotifyQueue | Waiting to read or update NOTIFY messages.
NotifyQueueTail | Waiting to update limit on NOTIFY message storage.
NotifySLRU | Waiting to access the NOTIFY message SLRU cache.
OidGen | Waiting to allocate a new OID.
OldSnapshotTimeMap | Waiting to read or update old snapshot control information.
ParallelAppend | Waiting to choose the next subplan during Parallel Append plan execution.
ParallelHashJoin | Waiting to synchronize workers during Parallel Hash Join plan execution.
ParallelQueryDSA | Waiting for parallel query dynamic shared memory allocation.
PerSessionDSA | Waiting for parallel query dynamic shared memory allocation.
PerSessionRecordType | Waiting to access a parallel query's information about composite types.
PerSessionRecordTypmod | Waiting to access a parallel query's information about type modifiers that identify anonymous record types.
PerXactPredicateList | Waiting to access the list of predicate locks held by the current serializable transaction during a parallel query.
PredicateLockManager | Waiting to access predicate lock information used by serializable transactions.
ProcArray | Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session's transaction ID).
RelationMapping | Waiting to read or update a pg_filenode.map file.
RelCacheInit | Waiting to read or update a pg_internal.init relation cache initialization file.
ReplicationOrigin | Waiting to create, drop or use a replication origin.
ReplicationOriginState | Waiting to read or update the progress of one replication origin.
ReplicationSlotAllocation | Waiting to allocate or free a replication slot.
ReplicationSlotControl | Waiting to read or update replication slot state.
ReplicationSlotIO | Waiting for I/O on a replication slot.
SerialBuffer | Waiting for I/O on a serializable transaction conflict SLRU buffer.
SerializableFinishedList | Waiting to access the list of finished serializable transactions.
SerializablePredicateList | Waiting to access the list of predicate locks held by serializable transactions.
PgStatsDSA | Waiting for stats dynamic shared memory allocator access.
PgStatsHash | Waiting for stats shared memory hash table access.
PgStatsData | Waiting for shared memory stats data access.
SerializableXactHash | Waiting to read or update information about serializable transactions.
SerialSLRU | Waiting to access the serializable transaction conflict SLRU cache.
SharedTidBitmap | Waiting to access a shared TID bitmap during a parallel bitmap index scan.
SharedTupleStore | Waiting to access a shared tuple store during parallel query.
ShmemIndex | Waiting to find or allocate space in shared memory.
SInvalRead | Waiting to retrieve messages from the shared catalog invalidation queue.
SInvalWrite | Waiting to add a message to the shared catalog invalidation queue.
SubtransBuffer | Waiting for I/O on a sub-transaction SLRU buffer.
SubtransSLRU | Waiting to access the sub-transaction SLRU cache.
SyncRep | Waiting to read or update information about the state of synchronous replication.
SyncScan | Waiting to select the starting location of a synchronized table scan.
TablespaceCreate | Waiting to create or drop a tablespace.
TwoPhaseState | Waiting to read or update the state of prepared transactions.
WALBufMapping | Waiting to replace a page in WAL buffers.
WALInsert | Waiting to insert WAL data into a memory buffer.
WALWrite | Waiting for WAL buffers to be written to disk.
WrapLimitsVacuum | Waiting to update limits on transaction id and multixact consumption.
XactBuffer | Waiting for I/O on a transaction status SLRU buffer.
XactSLRU | Waiting to access the transaction status SLRU cache.
XactTruncation | Waiting to execute pg_xact_status or update the oldest transaction ID available to it.
XidGen | Waiting to allocate a new transaction ID.
Wait Events of Type Timeout
Wait Event | Description
BaseBackupThrottle | Waiting during base backup when throttling activity.
CheckpointWriteDelay | Waiting between writes while performing a checkpoint.
PgSleep | Waiting due to a call to pg_sleep or a sibling function.
RecoveryApplyDelay | Waiting to apply WAL during recovery because of a delay setting.
RecoveryRetrieveRetryInterval | Waiting during recovery when WAL data is not available from any source (pg_wal, archive or stream).
RegisterSyncRequest | Waiting while sending synchronization requests to the checkpointer, because the request queue is full.
VacuumDelay | Waiting in a cost-based vacuum delay point.
VacuumTruncate | Waiting to acquire an exclusive lock to truncate off any empty pages at the end of a table vacuumed.
Here is an example of how wait events can be viewed:
SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event is NOT NULL;
pid | wait_event_type | wait_event
------+-----------------+------------
2540 | Lock | relation
6644 | LWLock | ProcArray
(2 rows)
1.2.4. pg_stat_replication
The pg_stat_replication view will contain one row per WAL sender process, showing statistics about replication to that sender’s connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers.
Column | Type | Description
pid | integer | Process ID of a WAL sender process
usesysid | oid | OID of the user logged into this WAL sender process
usename | name | Name of the user logged into this WAL sender process
application_name | text | Name of the application that is connected to this WAL sender
client_addr | inet | IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine.
client_hostname | text | Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.
client_port | integer | TCP port number that the client is using for communication with this WAL sender, or -1 if a Unix socket is used
backend_start | timestamp with time zone | Time when this process was started, i.e., when the client connected to this WAL sender
backend_xmin | xid | This standby's xmin horizon reported by hot_standby_feedback.
state | text | Current WAL sender state. Possible values are: startup, catchup, streaming, backup, stopping
sent_lsn | pg_lsn | Last write-ahead log location sent on this connection
write_lsn | pg_lsn | Last write-ahead log location written to disk by this standby server
flush_lsn | pg_lsn | Last write-ahead log location flushed to disk by this standby server
replay_lsn | pg_lsn | Last write-ahead log location replayed into the database on this standby server
write_lag | interval | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that synchronous_commit level remote_write incurred while committing if this server was configured as a synchronous standby.
flush_lag | interval | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that synchronous_commit level on incurred while committing if this server was configured as a synchronous standby.
replay_lag | interval | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that synchronous_commit level remote_apply incurred while committing if this server was configured as a synchronous standby.
sync_priority | integer | Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication.
sync_state | text | Synchronous state of this standby server. Possible values are: async, potential, sync, quorum
reply_time | timestamp with time zone | Send time of last reply message received from standby server
The lag times reported in the pg_stat_replication view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the replay_lag column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL.
Lag times work automatically for physical replication. Logical decoding plugins may optionally emit tracking messages; if they do not, the tracking mechanism will simply display NULL lag.
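For example, to check how far each standby lags behind, the lag columns can be queried directly:
SELECT application_name, state, sync_state,
       write_lag, flush_lag, replay_lag
  FROM pg_stat_replication;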
1.2.5. pg_stat_replication_slots
The pg_stat_replication_slots view will contain one row per logical replication slot, showing statistics about its usage.
Column | Type | Description
slot_name | text | A unique, cluster-wide identifier for the replication slot
spill_txns | bigint | Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded logical_decoding_work_mem. The counter gets incremented for both top-level transactions and subtransactions.
spill_count | bigint | Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times.
spill_bytes | bigint | Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning logical_decoding_work_mem.
stream_txns | bigint | Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded logical_decoding_work_mem. The counter is not incremented for subtransactions.
stream_count | bigint | Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times.
stream_bytes | bigint | Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune logical_decoding_work_mem.
total_txns | bigint | Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled.
total_bytes | bigint | Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled.
stats_reset | timestamp with time zone | Time at which these statistics were last reset
1.2.6. pg_stat_wal_receiver
The pg_stat_wal_receiver view will contain only one row, showing statistics about the WAL receiver from that receiver’s connected server.
Column | Type | Description
pid | integer | Process ID of the WAL receiver process
status | text | Activity status of the WAL receiver process
receive_start_lsn | pg_lsn | First write-ahead log location used when WAL receiver is started
receive_start_tli | integer | First timeline number used when WAL receiver is started
written_lsn | pg_lsn | Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks.
flushed_lsn | pg_lsn | Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started
received_tli | integer | Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started
last_msg_send_time | timestamp with time zone | Send time of last message received from origin WAL sender
last_msg_receipt_time | timestamp with time zone | Receipt time of last message received from origin WAL sender
latest_end_lsn | pg_lsn | Last write-ahead log location reported to origin WAL sender
latest_end_time | timestamp with time zone | Time of last write-ahead log location reported to origin WAL sender
slot_name | text | Replication slot name used by this WAL receiver
sender_host | text | Host of the IvorySQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with /.)
sender_port | integer | Port number of the IvorySQL instance this WAL receiver is connected to.
conninfo | text | Connection string used by this WAL receiver, with security-sensitive fields obfuscated.
1.2.7. pg_stat_recovery_prefetch
The pg_stat_recovery_prefetch view will contain only one row. The columns wal_distance, block_distance and io_depth show current values, and the other columns show cumulative counters that can be reset with the pg_stat_reset_shared function.
Column | Type | Description
stats_reset | timestamp with time zone | Time at which these statistics were last reset
prefetch | bigint | Number of blocks prefetched because they were not in the buffer pool
hit | bigint | Number of blocks not prefetched because they were already in the buffer pool
skip_init | bigint | Number of blocks not prefetched because they would be zero-initialized
skip_new | bigint | Number of blocks not prefetched because they didn't exist yet
skip_fpw | bigint | Number of blocks not prefetched because a full page image was included in the WAL
skip_rep | bigint | Number of blocks not prefetched because they were already recently prefetched
wal_distance | int | How many bytes ahead the prefetcher is looking
block_distance | int | How many blocks ahead the prefetcher is looking
io_depth | int | How many prefetches have been initiated but are not yet known to have completed
1.2.8. pg_stat_subscription
The pg_stat_subscription view will contain at least one row per subscription, showing information about the subscription workers.
Column | Type | Description
subid | oid | OID of the subscription
subname | name | Name of the subscription
pid | integer | Process ID of the subscription worker process
relid | oid | OID of the relation that the worker is synchronizing; null for the main apply worker
received_lsn | pg_lsn | Last write-ahead log location received, the initial value of this field being 0
last_msg_send_time | timestamp with time zone | Send time of last message received from origin WAL sender
last_msg_receipt_time | timestamp with time zone | Receipt time of last message received from origin WAL sender
latest_end_lsn | pg_lsn | Last write-ahead log location reported to origin WAL sender
latest_end_time | timestamp with time zone | Time of last write-ahead log location reported to origin WAL sender
1.2.9. pg_stat_subscription_stats
The pg_stat_subscription_stats view will contain one row per subscription.
Column | Type | Description
subid | oid | OID of the subscription
subname | name | Name of the subscription
apply_error_count | bigint | Number of times an error occurred while applying changes
sync_error_count | bigint | Number of times an error occurred during the initial table synchronization
stats_reset | timestamp with time zone | Time at which these statistics were last reset
1.2.10. pg_stat_ssl
The pg_stat_ssl view will contain one row per backend or WAL sender process, showing statistics about SSL usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.
Column | Type | Description
pid | integer | Process ID of a backend or WAL sender process
ssl | boolean | True if SSL is used on this connection
version | text | Version of SSL in use, or NULL if SSL is not in use on this connection
cipher | text | Name of SSL cipher in use, or NULL if SSL is not in use on this connection
bits | integer | Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection
client_dn | text | Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than NAMEDATALEN (64 characters in a standard build).
client_serial | numeric | Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers).
issuer_dn | text | DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like client_dn.
1.2.11. pg_stat_gssapi
The pg_stat_gssapi view will contain one row per backend, showing information about GSSAPI usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.
Column | Type | Description
pid | integer | Process ID of a backend
gss_authenticated | boolean | True if GSSAPI authentication was used for this connection
principal | text | Principal used to authenticate this connection, or NULL if GSSAPI was not used. Truncated to NAMEDATALEN (64 characters in a standard build) if longer.
encrypted | boolean | True if GSSAPI encryption is in use on this connection
1.2.12. pg_stat_archiver
The pg_stat_archiver view will always have a single row, containing data about the archiver process of the cluster.
Column | Type | Description
archived_count | bigint | Number of WAL files that have been successfully archived
last_archived_wal | text | Name of the WAL file most recently successfully archived
last_archived_time | timestamp with time zone | Time of the most recent successful archive operation
failed_count | bigint | Number of failed attempts for archiving WAL files
last_failed_wal | text | Name of the WAL file of the most recent failed archival operation
last_failed_time | timestamp with time zone | Time of the most recent failed archival operation
stats_reset | timestamp with time zone | Time at which these statistics were last reset
Normally, WAL files are archived in order, oldest to newest, but that is not guaranteed, and does not hold under special circumstances like when promoting a standby or after crash recovery. Therefore it is not safe to assume that all files older than last_archived_wal have also been successfully archived.
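For example, a quick health check of archiving can compare the success and failure counters (a failed_count that keeps growing while last_archived_time stalls usually points at a broken archive_command):
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time
  FROM pg_stat_archiver;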
1.2.13. pg_stat_bgwriter
The pg_stat_bgwriter view will always have a single row, containing global data for the cluster.
Column | Type | Description
checkpoints_timed | bigint | Number of scheduled checkpoints that have been performed
checkpoints_req | bigint | Number of requested checkpoints that have been performed
checkpoint_write_time | double precision | Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds
checkpoint_sync_time | double precision | Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds
buffers_checkpoint | bigint | Number of buffers written during checkpoints
buffers_clean | bigint | Number of buffers written by the background writer
maxwritten_clean | bigint | Number of times the background writer stopped a cleaning scan because it had written too many buffers
buffers_backend | bigint | Number of buffers written directly by a backend
buffers_backend_fsync | bigint | Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write)
buffers_alloc | bigint | Number of buffers allocated
stats_reset | timestamp with time zone | Time at which these statistics were last reset
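For example, the ratio of requested to scheduled checkpoints can hint that checkpoints are being forced too often (a sketch; reasonable thresholds depend on the workload):
SELECT checkpoints_timed, checkpoints_req,
       round(checkpoints_req::numeric
             / nullif(checkpoints_timed + checkpoints_req, 0), 2) AS req_fraction,
       buffers_checkpoint, buffers_clean, buffers_backend
  FROM pg_stat_bgwriter;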
1.2.14. pg_stat_wal
The pg_stat_wal view will always have a single row, containing data about WAL activity of the cluster.
Column | Type | Description
wal_records | bigint | Total number of WAL records generated
wal_fpi | bigint | Total number of WAL full page images generated
wal_bytes | numeric | Total amount of WAL generated in bytes
wal_buffers_full | bigint | Number of times WAL data was written to disk because WAL buffers became full
wal_write | bigint | Number of times WAL buffers were written out to disk
wal_sync | bigint | Number of times WAL files were synced to disk (if fsync is on and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero)
wal_write_time | double precision | Total amount of time spent writing WAL buffers to disk, in milliseconds (if track_wal_io_timing is enabled, otherwise zero)
wal_sync_time | double precision | Total amount of time spent syncing WAL files to disk, in milliseconds (if track_wal_io_timing is enabled, otherwise zero)
stats_reset | timestamp with time zone | Time at which these statistics were last reset
1.2.15. pg_stat_database
The pg_stat_database view will contain one row for each database in the cluster, plus one for shared objects, showing database-wide statistics.
Column | Type | Description
datid | oid | OID of this database, or 0 for objects belonging to a shared relation
datname | name | Name of this database, or NULL for shared objects
numbackends | integer | Number of backends currently connected to this database, or NULL for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.
xact_commit | bigint | Number of transactions in this database that have been committed
xact_rollback | bigint | Number of transactions in this database that have been rolled back
blks_read | bigint | Number of disk blocks read in this database
blks_hit | bigint | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the IvorySQL buffer cache, not the operating system's file system cache)
tup_returned | bigint | Number of live rows fetched by sequential scans and index entries returned by index scans in this database
tup_fetched | bigint | Number of live rows fetched by index scans in this database
tup_inserted | bigint | Number of rows inserted by queries in this database
tup_updated | bigint | Number of rows updated by queries in this database
tup_deleted | bigint | Number of rows deleted by queries in this database
conflicts | bigint | Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.)
temp_files | bigint | Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.
temp_bytes | bigint | Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.
deadlocks | bigint | Number of deadlocks detected in this database
checksum_failures | bigint | Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled.
checksum_last_failure | timestamp with time zone | Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled.
blk_read_time | double precision | Time spent reading data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)
blk_write_time | double precision | Time spent writing data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)
session_time | double precision | Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included)
active_time | double precision | Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states active and fastpath function call in pg_stat_activity)
idle_in_transaction_time | double precision | Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states idle in transaction and idle in transaction (aborted) in pg_stat_activity)
sessions | bigint | Total number of sessions established to this database
sessions_abandoned | bigint | Number of database sessions to this database that were terminated because connection to the client was lost
sessions_fatal | bigint | Number of database sessions to this database that were terminated by fatal errors
sessions_killed | bigint | Number of database sessions to this database that were terminated by operator intervention
stats_reset | timestamp with time zone | Time at which these statistics were last reset
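For example, commit/rollback counts and the buffer cache hit ratio per database can be read off directly:
SELECT datname, numbackends, xact_commit, xact_rollback,
       round(blks_hit::numeric
             / nullif(blks_hit + blks_read, 0), 4) AS cache_hit_ratio
  FROM pg_stat_database
 WHERE datname IS NOT NULL;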
1.2.16. pg_stat_database_conflicts
The pg_stat_database_conflicts view will contain one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view will only contain information on standby servers, since conflicts do not occur on primary servers.
Column | Type | Description
datid | oid | OID of a database
datname | name | Name of this database
confl_tablespace | bigint | Number of queries in this database that have been canceled due to dropped tablespaces
confl_lock | bigint | Number of queries in this database that have been canceled due to lock timeouts
confl_snapshot | bigint | Number of queries in this database that have been canceled due to old snapshots
confl_bufferpin | bigint | Number of queries in this database that have been canceled due to pinned buffers
confl_deadlock | bigint | Number of queries in this database that have been canceled due to deadlocks
1.2.17. pg_stat_all_tables
The pg_stat_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The pg_stat_user_tables and pg_stat_sys_tables views contain the same information, but filtered to only show user and system tables respectively.
Column | Type | Description
relid | oid | OID of a table
schemaname | name | Name of the schema that this table is in
relname | name | Name of this table
seq_scan | bigint | Number of sequential scans initiated on this table
seq_tup_read | bigint | Number of live rows fetched by sequential scans
idx_scan | bigint | Number of index scans initiated on this table
idx_tup_fetch | bigint | Number of live rows fetched by index scans
n_tup_ins | bigint | Number of rows inserted
n_tup_upd | bigint | Number of rows updated (includes HOT updated rows)
n_tup_del | bigint | Number of rows deleted
n_tup_hot_upd | bigint | Number of rows HOT updated (i.e., with no separate index update required)
n_live_tup | bigint | Estimated number of live rows
n_dead_tup | bigint | Estimated number of dead rows
n_mod_since_analyze | bigint | Estimated number of rows modified since this table was last analyzed
n_ins_since_vacuum | bigint | Estimated number of rows inserted since this table was last vacuumed
last_vacuum | timestamp with time zone | Last time at which this table was manually vacuumed (not counting VACUUM FULL)
last_autovacuum | timestamp with time zone | Last time at which this table was vacuumed by the autovacuum daemon
last_analyze | timestamp with time zone | Last time at which this table was manually analyzed
last_autoanalyze | timestamp with time zone | Last time at which this table was analyzed by the autovacuum daemon
vacuum_count | bigint | Number of times this table has been manually vacuumed (not counting VACUUM FULL)
autovacuum_count | bigint | Number of times this table has been vacuumed by the autovacuum daemon
analyze_count | bigint | Number of times this table has been manually analyzed
autoanalyze_count | bigint | Number of times this table has been analyzed by the autovacuum daemon
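For example, tables accumulating dead rows that have not been vacuumed recently can be listed as follows (a sketch for spotting autovacuum problems):
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;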
1.2.18. pg_stat_all_indexes
The pg_stat_all_indexes view will contain one row for each index in the current database, showing statistics about accesses to that specific index. The pg_stat_user_indexes and pg_stat_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.
Column | Type | Description
relid | oid | OID of the table for this index
indexrelid | oid | OID of this index
schemaname | name | Name of the schema this index is in
relname | name | Name of the table for this index
indexrelname | name | Name of this index
idx_scan | bigint | Number of index scans initiated on this index
idx_tup_read | bigint | Number of index entries returned by scans on this index
idx_tup_fetch | bigint | Number of live table rows fetched by simple index scans using this index
Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale.
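For example, rarely used user indexes (candidates for closer inspection before dropping) can be found with:
SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 ORDER BY idx_scan
 LIMIT 10;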
1.2.19. pg_statio_all_tables
The pg_statio_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The pg_statio_user_tables and pg_statio_sys_tables views contain the same information, but filtered to only show user and system tables respectively.
Column | Type | Description
relid | oid | OID of a table
schemaname | name | Name of the schema that this table is in
relname | name | Name of this table
heap_blks_read | bigint | Number of disk blocks read from this table
heap_blks_hit | bigint | Number of buffer hits in this table
idx_blks_read | bigint | Number of disk blocks read from all indexes on this table
idx_blks_hit | bigint | Number of buffer hits in all indexes on this table
toast_blks_read | bigint | Number of disk blocks read from this table's TOAST table (if any)
toast_blks_hit | bigint | Number of buffer hits in this table's TOAST table (if any)
tidx_blks_read | bigint | Number of disk blocks read from this table's TOAST table indexes (if any)
tidx_blks_hit | bigint | Number of buffer hits in this table's TOAST table indexes (if any)
1.2.20. pg_statio_all_indexes
The pg_statio_all_indexes view will contain one row for each index in the current database, showing statistics about I/O on that specific index. The pg_statio_user_indexes and pg_statio_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.
Column | Type | Description
relid | oid | OID of the table for this index
indexrelid | oid | OID of this index
schemaname | name | Name of the schema this index is in
relname | name | Name of the table for this index
indexrelname | name | Name of this index
idx_blks_read | bigint | Number of disk blocks read from this index
idx_blks_hit | bigint | Number of buffer hits in this index
1.2.21. pg_statio_all_sequences
The pg_statio_all_sequences view will contain one row for each sequence in the current database, showing statistics about I/O on that specific sequence.
Column | Type | Description
relid | oid | OID of a sequence
schemaname | name | Name of the schema this sequence is in
relname | name | Name of this sequence
blks_read | bigint | Number of disk blocks read from this sequence
blks_hit | bigint | Number of buffer hits in this sequence
1.2.22. pg_stat_user_functions
The pg_stat_user_functions view will contain one row for each tracked function, showing statistics about executions of that function. The track_functions parameter controls exactly which functions are tracked.
Column | Type | Description
funcid | oid | OID of a function
schemaname | name | Name of the schema this function is in
funcname | name | Name of this function
calls | bigint | Number of times this function has been called
total_time | double precision | Total time spent in this function and all other functions called by it, in milliseconds
self_time | double precision | Total time spent in this function itself, not including other functions called by it, in milliseconds
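For example, after enabling tracking the most expensive functions can be listed:
SET track_functions = 'all';
-- ... run the workload, then:
SELECT funcname, calls, total_time, self_time
  FROM pg_stat_user_functions
 ORDER BY total_time DESC;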
1.2.23. pg_stat_slru
IvorySQL accesses certain on-disk information via SLRU (simple least-recently-used) caches. The pg_stat_slru view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages.
Column | Type | Description
name | text | Name of the SLRU
blks_zeroed | bigint | Number of blocks zeroed during initializations
blks_hit | bigint | Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache)
blks_read | bigint | Number of disk blocks read for this SLRU
blks_written | bigint | Number of disk blocks written for this SLRU
blks_exists | bigint | Number of blocks checked for existence for this SLRU
flushes | bigint | Number of flushes of dirty data for this SLRU
truncates | bigint | Number of truncates for this SLRU
stats_reset | timestamp with time zone | Time at which these statistics were last reset
1.2.24. Statistics Functions
Other ways of looking at the statistics can be set up by writing queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in psql you could issue \d+ pg_stat_activity.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. The functions for per-function statistics take a function OID. Note that only tables, indexes, and functions in the current database can be seen with these functions.
Function | Description
pg_backend_pid() → integer | Returns the process ID of the server process attached to the current session.
pg_stat_get_activity(integer) → setof record | Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if NULL is specified.
pg_stat_get_snapshot_timestamp() → timestamp with time zone | Returns the timestamp of the current statistics snapshot, or NULL if no statistics snapshot has been taken. A snapshot is taken the first time cumulative statistics are accessed in a transaction if stats_fetch_consistency is set to snapshot.
pg_stat_clear_snapshot() → void | Discards the current statistics snapshot or cached information.
pg_stat_reset() → void | Resets all statistics counters for the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_shared(text) → void | Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be bgwriter, archiver, wal, or recovery_prefetch.
pg_stat_reset_single_table_counters(oid) → void | Resets statistics for a single table or index in the current database or shared across all databases in the cluster to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_single_function_counters(oid) → void | Resets statistics for a single function in the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_reset_slru(text) → void | Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the pg_stat_slru view for all SLRU caches are reset.
pg_stat_reset_replication_slot(text) → void | Resets statistics of the replication slot defined by the argument. If the argument is NULL, resets statistics for all replication slots.
pg_stat_reset_subscription_stats(oid) → void | Resets statistics for a single subscription shown in the pg_stat_subscription_stats view to zero. If the argument is NULL, resets statistics for all subscriptions.
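To illustrate how the per-database access functions mentioned above are used, the following sketch feeds each row of pg_database into pg_stat_get_db_xact_commit, one of the standard per-database statistics access functions:
SELECT datname, pg_stat_get_db_xact_commit(oid) AS xact_commit
FROM pg_database
ORDER BY xact_commit DESC;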
pg_stat_get_activity, the underlying function of the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these access functions use a backend ID number, which ranges from one to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for invoking these functions. For example, to show the PIDs and current queries of all backends:
SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
pg_stat_get_backend_activity(s.backendid) AS query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
Function | Description
pg_stat_get_backend_idset() → setof integer | Returns the set of currently active backend ID numbers (from 1 to the number of active backends).
pg_stat_get_backend_activity(integer) → text | Returns the text of this backend’s most recent query.
pg_stat_get_backend_activity_start(integer) → timestamp with time zone | Returns the time when the backend’s most recent query was started.
pg_stat_get_backend_client_addr(integer) → inet | Returns the IP address of the client connected to this backend.
pg_stat_get_backend_client_port(integer) → integer | Returns the TCP port number that the client is using for communication.
pg_stat_get_backend_dbid(integer) → oid | Returns the OID of the database this backend is connected to.
pg_stat_get_backend_pid(integer) → integer | Returns the process ID of this backend.
pg_stat_get_backend_start(integer) → timestamp with time zone | Returns the time when this process was started.
pg_stat_get_backend_userid(integer) → oid | Returns the OID of the user logged into this backend.
pg_stat_get_backend_wait_event_type(integer) → text | Returns the wait event type name if this backend is currently waiting, otherwise NULL.
pg_stat_get_backend_wait_event(integer) → text | Returns the wait event name if this backend is currently waiting, otherwise NULL.
pg_stat_get_backend_xact_start(integer) → timestamp with time zone | Returns the time when the backend’s current transaction was started.
1.3. Viewing Locks
Another useful tool for monitoring database activity is the pg_locks system view. It allows the database administrator to view information about the outstanding locks in the lock manager. For example, this capability can be used to (see the example queries after this list):
-
View all the locks currently outstanding, all the locks on relations in a particular database, all the locks on a particular relation, or all the locks held by a particular IvorySQL session.
-
Determine the relation in the current database with the most ungranted locks (which might be a source of contention among database clients).
-
Determine the effect of lock contention on overall database performance, as well as the extent to which contention varies with overall database traffic.
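For instance, the second item in the list above can be approached with a sketch like the following; the second query uses pg_blocking_pids to show who is blocking whom:
-- Relations with the most ungranted lock requests
SELECT relation::regclass AS relation, count(*) AS ungranted
FROM pg_locks
WHERE NOT granted AND relation IS NOT NULL
GROUP BY relation
ORDER BY ungranted DESC;

-- Sessions that are blocked, and the PIDs blocking them
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;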
1.4. Progress Reporting
IvorySQL has the ability to report the progress of certain commands during command execution. Currently, the only commands which support progress reporting are ANALYZE, CLUSTER, CREATE INDEX, VACUUM, COPY, and BASE_BACKUP (i.e., the replication command that pg_basebackup issues to take a base backup). This list may be expanded in the future.
1.4.1. ANALYZE Progress Reporting
Whenever ANALYZE is running, the pg_stat_progress_analyze view will contain a row for each backend that is currently running that command. The tables below describe the information that will be reported and provide information about how to interpret it.
Column | Type | Description
pid | integer | Process ID of backend.
datid | oid | OID of the database to which this backend is connected.
datname | name | Name of the database to which this backend is connected.
relid | oid | OID of the table being analyzed.
phase | text | Current processing phase. See Table 1.37.
sample_blks_total | bigint | Total number of heap blocks that will be sampled.
sample_blks_scanned | bigint | Number of heap blocks scanned.
ext_stats_total | bigint | Number of extended statistics.
ext_stats_computed | bigint | Number of extended statistics computed. This counter only advances when the phase is computing extended statistics.
child_tables_total | bigint | Number of child tables.
child_tables_done | bigint | Number of child tables scanned. This counter only advances when the phase is acquiring inherited sample rows.
current_child_table_relid | oid | OID of the child table currently being scanned. This field is only valid when the phase is acquiring inherited sample rows.
Phase | Description
initializing | The command is preparing to begin scanning the heap. This phase is expected to be very brief.
acquiring sample rows | The command is currently scanning the table given by relid to obtain sample rows.
acquiring inherited sample rows | The command is currently scanning child tables to obtain sample rows. Columns child_tables_total, child_tables_done, and current_child_table_relid contain the progress information for this phase.
computing statistics | The command is computing statistics from the sample rows obtained during the table scan.
computing extended statistics | The command is computing extended statistics from the sample rows obtained during the table scan.
finalizing analyze | The command is updating pg_class. When this phase is completed, ANALYZE will end.
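For example, a running ANALYZE can be watched with a query like this sketch:
SELECT pid, relid::regclass AS relation, phase,
       sample_blks_scanned, sample_blks_total
FROM pg_stat_progress_analyze;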
1.4.2. CREATE INDEX Progress Reporting
Whenever CREATE INDEX or REINDEX is running, the pg_stat_progress_create_index view will contain one row for each backend that is currently creating indexes. The tables below describe the information that will be reported and provide information about how to interpret it.
Column | Type | Description
pid | integer | Process ID of backend.
datid | oid | OID of the database to which this backend is connected.
datname | name | Name of the database to which this backend is connected.
relid | oid | OID of the table on which the index is being created.
index_relid | oid | OID of the index being created or reindexed. During a non-concurrent CREATE INDEX, this field is 0.
command | text | The command that is running: CREATE INDEX, CREATE INDEX CONCURRENTLY, REINDEX, or REINDEX CONCURRENTLY.
phase | text | Current processing phase of index creation. See Table 1.39.
lockers_total | bigint | Total number of lockers to wait for, when applicable.
lockers_done | bigint | Number of lockers already waited for.
current_locker_pid | bigint | Process ID of the locker currently being waited for.
blocks_total | bigint | Total number of blocks to be processed in the current phase.
blocks_done | bigint | Number of blocks already processed in the current phase.
tuples_total | bigint | Total number of tuples to be processed in the current phase.
tuples_done | bigint | Number of tuples already processed in the current phase.
partitions_total | bigint | When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created. This field is 0 during a REINDEX.
partitions_done | bigint | When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been created. This field is 0 during a REINDEX.
Phase | Description
initializing | CREATE INDEX or REINDEX is preparing to create the index. This phase is expected to be very brief.
waiting for writers before build | CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially see the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
building index | The index is being built by the access method-specific code. In this phase, access methods that support progress reporting fill in their own progress data, and the subphase is indicated in this column. Typically, blocks_total and blocks_done will contain progress data, as well as potentially tuples_total and tuples_done.
waiting for writers before validation | CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially write into the table to finish. This phase is skipped when not in concurrent mode.
index validation: scanning index | CREATE INDEX CONCURRENTLY is scanning the index searching for tuples that need to be validated. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the index) and blocks_done contain the progress information for this phase.
index validation: sorting tuples | CREATE INDEX CONCURRENTLY is sorting the output of the index scanning phase.
index validation: scanning table | CREATE INDEX CONCURRENTLY is scanning the table to validate the index tuples collected in the previous two phases. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the table) and blocks_done contain the progress information for this phase.
waiting for old snapshots | CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions that can potentially see the table to release their snapshots. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
waiting for readers before marking dead | REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before marking the old index dead. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
waiting for readers before dropping | REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before dropping the old index. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
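For example, this sketch reports block- and tuple-level completion percentages for index builds currently in progress:
SELECT pid, command, phase,
       round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS blocks_pct,
       round(100.0 * tuples_done / nullif(tuples_total, 0), 1) AS tuples_pct
FROM pg_stat_progress_create_index;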
1.4.3. VACUUM Progress Reporting
Whenever VACUUM is running, the pg_stat_progress_vacuum view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. Progress for VACUUM FULL commands is reported via pg_stat_progress_cluster because both VACUUM FULL and CLUSTER rewrite the table, while regular VACUUM only modifies it in place.
Column | Type | Description
pid | integer | Process ID of backend.
datid | oid | OID of the database to which this backend is connected.
datname | name | Name of the database to which this backend is connected.
relid | oid | OID of the table being vacuumed.
phase | text | Current processing phase of vacuum.
heap_blks_total | bigint | Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this VACUUM.
heap_blks_scanned | bigint | Number of heap blocks scanned. Because the visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to heap_blks_total when the vacuum is complete. This counter only advances when the phase is scanning heap.
heap_blks_vacuumed | bigint | Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is vacuuming heap. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments.
index_vacuum_count | bigint | Number of completed index vacuum cycles.
max_dead_tuples | bigint | Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on maintenance_work_mem.
num_dead_tuples | bigint | Number of dead tuples collected since the last index vacuum cycle.
Phase | Description
initializing | VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief.
scanning heap | VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing activity. The heap_blks_scanned column can be used to monitor the progress of the scan.
vacuuming indexes | VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if maintenance_work_mem (or, in the case of autovacuum, autovacuum_work_mem if set) is insufficient to store the number of dead tuples found.
vacuuming heap | VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed.
cleaning up indexes | VACUUM is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed.
truncating heap | VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes.
performing final cleanup | VACUUM is performing final cleanup. During this phase, VACUUM will vacuum the free space map, update statistics in pg_class, and report statistics to the cumulative statistics system. When this phase is completed, VACUUM will end.
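For example, this sketch shows how far each running vacuum has progressed through its table scan:
SELECT pid, relid::regclass AS relation, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1) AS scanned_pct,
       index_vacuum_count, num_dead_tuples
FROM pg_stat_progress_vacuum;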
1.4.4. CLUSTER Progress Reporting
Whenever CLUSTER or VACUUM FULL is running, the pg_stat_progress_cluster view will contain a row for each backend that is currently running either command. The tables below describe the information that will be reported and provide information about how to interpret it.
Column | Type | Description
pid | integer | Process ID of backend.
datid | oid | OID of the database to which this backend is connected.
datname | name | Name of the database to which this backend is connected.
relid | oid | OID of the table being clustered.
command | text | The command that is running. Either CLUSTER or VACUUM FULL.
phase | text | Current processing phase. See Table 1.43.
cluster_index_relid | oid | If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero.
heap_tuples_scanned | bigint | Number of heap tuples scanned. This counter only advances when the phase is seq scanning heap, index scanning heap or writing new heap.
heap_tuples_written | bigint | Number of heap tuples written. This counter only advances when the phase is seq scanning heap, index scanning heap or writing new heap.
heap_blks_total | bigint | Total number of heap blocks in the table. This number is reported as of the beginning of seq scanning heap.
heap_blks_scanned | bigint | Number of heap blocks scanned. This counter only advances when the phase is seq scanning heap.
index_rebuild_count | bigint | Number of indexes rebuilt. This counter only advances when the phase is rebuilding index.
Phase | Description
initializing | The command is preparing to begin scanning the heap. This phase is expected to be very brief.
seq scanning heap | The command is currently scanning the table using a sequential scan.
index scanning heap | CLUSTER is currently scanning the table using an index scan.
sorting tuples | CLUSTER is currently sorting tuples.
writing new heap | CLUSTER is currently writing the new heap.
swapping relation files | The command is currently swapping newly-built files into place.
rebuilding index | The command is currently rebuilding an index.
performing final cleanup | The command is performing final cleanup. When this phase is completed, CLUSTER or VACUUM FULL will end.
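For example, a sketch for watching a table rewrite in progress:
SELECT pid, relid::regclass AS relation, command, phase,
       heap_tuples_scanned, heap_tuples_written, index_rebuild_count
FROM pg_stat_progress_cluster;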
1.4.5. Base Backup Progress Reporting
Whenever an application like pg_basebackup is taking a base backup, the pg_stat_progress_basebackup view will contain a row for each WAL sender process that is currently running the BASE_BACKUP replication command and streaming the backup. The tables below describe the information that will be reported and provide information about how to interpret it.
Column | Type | Description
pid | integer | Process ID of the WAL sender process running the backup.
phase | text | Current processing phase. See the phase table below.
backup_total | bigint | Total amount of data that will be streamed. This is estimated and reported as of the beginning of the streaming database files phase. It is NULL if the size estimation has been disabled in pg_basebackup (the --no-estimate-size option).
backup_streamed | bigint | Amount of data streamed. This counter only advances when the phase is streaming database files or transferring wal files.
tablespaces_total | bigint | Total number of tablespaces that will be streamed.
tablespaces_streamed | bigint | Number of tablespaces streamed. This counter only advances when the phase is streaming database files.
Phase | Description
initializing | The WAL sender process is preparing to begin the backup. This phase is expected to be very brief.
waiting for checkpoint to finish | The WAL sender process is currently performing the start-of-backup preparation and waiting for the required checkpoint to finish.
estimating backup size | The WAL sender process is currently estimating the total amount of database files that will be streamed as a base backup.
streaming database files | The WAL sender process is currently streaming database files as a base backup.
waiting for wal archiving to finish | The WAL sender process is currently performing the end-of-backup processing and waiting for all the WAL files required for the base backup to be successfully archived.
transferring wal files | The WAL sender process is currently transferring all WAL logs generated during the backup. This phase occurs after the waiting for wal archiving to finish phase if --wal-method=fetch is specified in pg_basebackup. The backup will end when this phase is completed.
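For example, this sketch shows streaming progress; note that streamed_pct will be NULL if size estimation was disabled with pg_basebackup's --no-estimate-size option:
SELECT pid, phase,
       round(100.0 * backup_streamed / nullif(backup_total, 0), 1) AS streamed_pct,
       tablespaces_streamed, tablespaces_total
FROM pg_stat_progress_basebackup;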
1.4.6. COPY Progress Reporting
Whenever COPY is running, the pg_stat_progress_copy view will contain one row for each backend that is currently running a COPY command. The table below describes the information that will be reported and provides information about how to interpret it.
Column | Type | Description
pid | integer | Process ID of backend.
datid | oid | OID of the database to which this backend is connected.
datname | name | Name of the database to which this backend is connected.
relid | oid | OID of the table on which the COPY command is executed. It is set to 0 if copying from a SELECT query.
command | text | The command that is running: COPY FROM or COPY TO.
type | text | The I/O type that the data is read from or written to: FILE, PROGRAM, PIPE (for COPY FROM STDIN and COPY TO STDOUT), or CALLBACK (used, for example, during the initial table synchronization in logical replication).
bytes_processed | bigint | Number of bytes already processed by the COPY command.
bytes_total | bigint | Size of the source file for a COPY FROM command, in bytes. It is set to 0 if not available.
tuples_processed | bigint | Number of tuples already processed by the COPY command.
tuples_excluded | bigint | Number of tuples not processed because they were excluded by the WHERE clause of the COPY command.
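For example, a sketch for watching running COPY commands:
SELECT pid, relid::regclass AS relation, command, type,
       bytes_processed, tuples_processed, tuples_excluded
FROM pg_stat_progress_copy;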
1.5. Dynamic Tracing
IvorySQL provides facilities to support dynamic tracing of the database server. This allows an external utility to be called at specific points in the code and thereby trace execution.
A number of probes or trace points are already inserted into the source code. These probes are intended to be used by database developers and administrators. By default the probes are not compiled into IvorySQL; the user needs to explicitly tell the configure script to make the probes available.
Currently, the DTrace utility is supported, which, at the time of this writing, is available on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for the macros in src/include/utils/probes.h.
1.5.1. Compiling for Dynamic Tracing
By default, probes are not compiled into the server, so you need to tell the configure script explicitly to make them available. To include DTrace support, specify --enable-dtrace when running configure.
1.5.2. Built-in Probes
A number of standard probes are provided in the source code. More probes can be added as needed to enhance IvorySQL’s observability.
Name | Parameters | Description
transaction-start | (LocalTransactionId) | Probe that fires at the start of a new transaction. arg0 is the transaction ID.
transaction-commit | (LocalTransactionId) | Probe that fires when a transaction completes successfully. arg0 is the transaction ID.
transaction-abort | (LocalTransactionId) | Probe that fires when a transaction completes unsuccessfully. arg0 is the transaction ID.
query-start | (const char *) | Probe that fires when the processing of a query is started. arg0 is the query string.
query-done | (const char *) | Probe that fires when the processing of a query is complete. arg0 is the query string.
query-parse-start | (const char *) | Probe that fires when the parsing of a query is started. arg0 is the query string.
query-parse-done | (const char *) | Probe that fires when the parsing of a query is complete. arg0 is the query string.
query-rewrite-start | (const char *) | Probe that fires when the rewriting of a query is started. arg0 is the query string.
query-rewrite-done | (const char *) | Probe that fires when the rewriting of a query is complete. arg0 is the query string.
query-plan-start | () | Probe that fires when the planning of a query is started.
query-plan-done | () | Probe that fires when the planning of a query is complete.
query-execute-start | () | Probe that fires when the execution of a query is started.
query-execute-done | () | Probe that fires when the execution of a query is complete.
statement-status | (const char *) | Probe that fires anytime the server process updates its pg_stat_activity status. arg0 is the new status string.
checkpoint-start | (int) | Probe that fires when a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
checkpoint-done | (int, int, int, int, int) | Probe that fires when a checkpoint is complete. (The probes listed next fire in sequence during checkpoint processing.) arg0 is the number of buffers written. arg1 is the total number of buffers. arg2, arg3 and arg4 contain the number of WAL files added, removed and recycled respectively.
clog-checkpoint-start | (bool) | Probe that fires when the CLOG portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
clog-checkpoint-done | (bool) | Probe that fires when the CLOG portion of a checkpoint is complete. arg0 has the same meaning as for clog-checkpoint-start.
subtrans-checkpoint-start | (bool) | Probe that fires when the SUBTRANS portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
subtrans-checkpoint-done | (bool) | Probe that fires when the SUBTRANS portion of a checkpoint is complete. arg0 has the same meaning as for subtrans-checkpoint-start.
multixact-checkpoint-start | (bool) | Probe that fires when the MultiXact portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.
multixact-checkpoint-done | (bool) | Probe that fires when the MultiXact portion of a checkpoint is complete. arg0 has the same meaning as for multixact-checkpoint-start.
buffer-checkpoint-start | (int) | Probe that fires when the buffer-writing portion of a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.
buffer-sync-start | (int, int) | Probe that fires when we begin to write dirty buffers during checkpoint (after identifying which buffers must be written). arg0 is the total number of buffers. arg1 is the number that are currently dirty and need to be written.
buffer-sync-written | (int) | Probe that fires after each buffer is written during checkpoint. arg0 is the ID number of the buffer.
buffer-sync-done | (int, int, int) | Probe that fires when all dirty buffers have been written. arg0 is the total number of buffers. arg1 is the number of buffers actually written by the checkpoint process. arg2 is the number that were expected to be written (arg1 of buffer-sync-start); any difference reflects other processes flushing buffers during the checkpoint.
buffer-checkpoint-sync-start | () | Probe that fires after dirty buffers have been written to the kernel, and before starting to issue fsync requests.
buffer-checkpoint-done | () | Probe that fires when syncing of buffers to disk is complete.
twophase-checkpoint-start | () | Probe that fires when the two-phase portion of a checkpoint is started.
twophase-checkpoint-done | () | Probe that fires when the two-phase portion of a checkpoint is complete.
buffer-read-start | (ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool) | Probe that fires when a buffer read is started. arg0 and arg1 contain the fork and block numbers of the page (but arg1 will be -1 if this is a relation extension request). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read.
buffer-read-done | (ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool) | Probe that fires when a buffer read is complete. arg0 and arg1 contain the fork and block numbers of the page (if this is a relation extension request, arg1 now contains the block number of the newly added block). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read. arg7 is true if the buffer was found in the pool, false if not.
buffer-flush-start | (ForkNumber, BlockNumber, Oid, Oid, Oid) | Probe that fires before issuing any write request for a shared buffer. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
buffer-flush-done | (ForkNumber, BlockNumber, Oid, Oid, Oid) | Probe that fires when a write request is complete. (Note that this just reflects the time to pass the data to the kernel; it’s typically not actually been written to disk yet.) The arguments are the same as for buffer-flush-start.
buffer-write-dirty-start | (ForkNumber, BlockNumber, Oid, Oid, Oid) | Probe that fires when a server process begins to write a dirty buffer. (If this happens often, it implies that shared_buffers is too small or the background writer control parameters need adjustment.) arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.
buffer-write-dirty-done | (ForkNumber, BlockNumber, Oid, Oid, Oid) | Probe that fires when a dirty-buffer write is complete. The arguments are the same as for buffer-write-dirty-start.
wal-buffer-write-dirty-start | () | Probe that fires when a server process begins to write a dirty WAL buffer because no more WAL buffer space is available. (If this happens often, it implies that wal_buffers is too small.)
wal-buffer-write-dirty-done | () | Probe that fires when a dirty WAL buffer write is complete.
wal-insert | (unsigned char, unsigned char) | Probe that fires when a WAL record is inserted. arg0 is the resource manager (rmid) for the record. arg1 contains the info flags.
wal-switch | () | Probe that fires when a WAL segment switch is requested.
smgr-md-read-start | (ForkNumber, BlockNumber, Oid, Oid, Oid, int) | Probe that fires when beginning to read a block from a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.
smgr-md-read-done | (ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int) | Probe that fires when a block read is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually read, while arg7 is the number requested (if these are different it indicates trouble).
smgr-md-write-start | (ForkNumber, BlockNumber, Oid, Oid, Oid, int) | Probe that fires when beginning to write a block to a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.
smgr-md-write-done | (ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int) | Probe that fires when a block write is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually written, while arg7 is the number requested (if these are different it indicates trouble).
sort-start | (int, bool, int, int, bool, int) | Probe that fires when a sort operation is started. arg0 indicates heap, index or datum sort. arg1 is true for unique-value enforcement. arg2 is the number of key columns. arg3 is the number of kilobytes of work memory allowed. arg4 is true if random access to the sort result is required. arg5 indicates serial when 0, parallel worker when 1, or parallel leader when 2.
sort-done | (bool, long) | Probe that fires when a sort is complete. arg0 is true for external sort, false for internal sort. arg1 is the number of disk blocks used for an external sort, or kilobytes of memory used for an internal sort.
lwlock-acquire | (char *, LWLockMode) | Probe that fires when an LWLock has been acquired. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-release | (char *) | Probe that fires when an LWLock has been released (but note that any released waiters have not yet been awakened). arg0 is the LWLock’s tranche.
lwlock-wait-start | (char *, LWLockMode) | Probe that fires when an LWLock was not immediately available and a server process has begun to wait for the lock to become available. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-wait-done | (char *, LWLockMode) | Probe that fires when a server process has been released from its wait for an LWLock (it does not actually have the lock yet). arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire | (char *, LWLockMode) | Probe that fires when an LWLock was successfully acquired when the caller specified no waiting. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.
lwlock-condacquire-fail | (char *, LWLockMode) | Probe that fires when an LWLock was not successfully acquired when the caller specified no waiting. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.
lock-wait-start | (unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE) | Probe that fires when a request for a heavyweight lock (lmgr lock) has begun to wait because the lock is not available. arg0 through arg3 are the tag fields identifying the object being locked. arg4 indicates the type of object being locked. arg5 indicates the lock type being requested.
lock-wait-done | (unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE) | Probe that fires when a request for a heavyweight lock (lmgr lock) has finished waiting (i.e., has acquired the lock). The arguments are the same as for lock-wait-start.
deadlock-found | () | Probe that fires when a deadlock is found by the deadlock detector.
Type | Definition
LocalTransactionId | unsigned int
LWLockMode | int
LOCKMODE | int
BlockNumber | unsigned int
Oid | unsigned int
ForkNumber | int
bool | unsigned char
1.5.3. Using Probes
The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting pg_stat_database before and after a performance test:
#!/usr/sbin/dtrace -qs
postgresql$1:::transaction-start
{
@start["Start"] = count();
self->ts = timestamp;
}
postgresql$1:::transaction-abort
{
@abort["Abort"] = count();
}
postgresql$1:::transaction-commit
/self->ts/
{
@commit["Commit"] = count();
@time["Total time (ns)"] = sum(timestamp - self->ts);
self->ts=0;
}
When executed, the example D script gives output such as:
# ./txn_count.d `pgrep -n postgres` or ./txn_count.d <PID>
^C
Start 71
Commit 70
Total time (ns) 2312105013
1.5.4. Defining New Probes
New probes can be defined within the code wherever the developer desires, though this will require a recompilation. Below are the steps for inserting new probes:
-
Decide on probe names and data to be made available through the probes
-
Add the probe definitions to src/backend/utils/probes.d
-
Include pg_trace.h if it is not already present in the module(s) containing the probe points, and insert TRACE_POSTGRESQL probe macros at the desired locations in the source code
-
Recompile and verify that the new probes are available
Example: Here is an example of how you would add a probe to trace all new transactions by transaction ID.
-
Decide that the probe will be named transaction-start and requires a parameter of type LocalTransactionId.
-
Add the probe definition to src/backend/utils/probes.d:
probe transaction__start(LocalTransactionId);
Note the use of the double underline in the probe name. In a DTrace script using the probe, the double underline needs to be replaced with a hyphen, so transaction-start is the name to document for users.
-
At compile time, transaction__start is converted to a macro called TRACE_POSTGRESQL_TRANSACTION_START (notice the underscores are single here), which is available by including pg_trace.h. Add the macro call to the appropriate location in the source code. In this case, it looks like the following:
TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
-
After recompiling and running the new binary, check that your newly added probe is available by executing the following DTrace command. You should see similar output:
# dtrace -ln transaction-start
   ID    PROVIDER          MODULE                 FUNCTION NAME
18705 postgresql49878   postgres  StartTransactionCommand transaction-start
18755 postgresql49877   postgres  StartTransactionCommand transaction-start
18805 postgresql49876   postgres  StartTransactionCommand transaction-start
18855 postgresql49875   postgres  StartTransactionCommand transaction-start
18986 postgresql49873   postgres  StartTransactionCommand transaction-start
There are a few things to be careful about when adding trace macros to the C code:
-
You should take care that the data types specified for a probe’s parameters match the data types of the variables used in the macro. Otherwise, you will get compilation errors.
-
On most platforms, if IvorySQL is built with --enable-dtrace, the arguments to a trace macro will be evaluated whenever control passes through the macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, consider protecting the macro with a check to see if the trace is actually enabled:
if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
    TRACE_POSTGRESQL_TRANSACTION_START(some_function(...));
Each trace macro has a corresponding ENABLED macro.
2. Monitoring Disk Usage
2.1. Determining Disk Usage
Each table has a primary heap disk file where most of the data is stored. If the table has any columns with potentially-wide values, there also might be a TOAST file associated with the table, which is used to store values too wide to fit comfortably in the main table. There will be one valid index on the TOAST table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file (possibly more than one file, if the file would exceed one gigabyte).
You can monitor disk space in three ways: using SQL functions, using the oid2name module, or manually inspecting the system catalogs. The SQL functions are the easiest to use and are generally recommended; an example follows. The remainder of this section shows how to do it by inspecting the system catalogs.
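For example (a sketch using the standard size functions; customer stands in for any table of interest):
SELECT pg_size_pretty(pg_relation_size('customer'))       AS main_fork,
       pg_size_pretty(pg_total_relation_size('customer')) AS total_with_toast_and_indexes;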
Using psql on a recently vacuumed or analyzed database, you can issue queries to see the disk usage of any table:
SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'customer';
pg_relation_filepath | relpages
----------------------+----------
base/16384/16806 | 60
(1 row)
Each page is typically 8 kilobytes. (Remember, relpages is only updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE INDEX.) The file path name is of interest if you want to examine the table’s disk file directly.
To show the space used by TOAST tables, use a query like the following:
SELECT relname, relpages
FROM pg_class,
(SELECT reltoastrelid
FROM pg_class
WHERE relname = 'customer') AS ss
WHERE oid = ss.reltoastrelid OR
oid = (SELECT indexrelid
FROM pg_index
WHERE indrelid = ss.reltoastrelid)
ORDER BY relname;
relname | relpages
----------------------+----------
pg_toast_16806 | 0
pg_toast_16806_index | 1
You can easily display index sizes, too:
SELECT c2.relname, c2.relpages
FROM pg_class c, pg_class c2, pg_index i
WHERE c.relname = 'customer' AND
c.oid = i.indrelid AND
c2.oid = i.indexrelid
ORDER BY c2.relname;
relname | relpages
-------------------+----------
customer_id_index | 26
It is easy to find your largest tables and indexes using this information:
SELECT relname, relpages
FROM pg_class
ORDER BY relpages DESC;
relname | relpages
----------------------+----------
bigtable | 3290
customer | 3144
2.2. Disk Full Failure
The most important disk monitoring task of a database administrator is to make sure the disk doesn’t become full. A full data disk will not result in data corruption, but it might prevent useful activity from occurring. If the disk holding the WAL files fills up, the database server may panic and consequently shut down.
If you cannot free up additional space on the disk by deleting other things, you can move some of the database files to other file systems by making use of tablespaces.
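As a sketch (the tablespace name, directory, and table here are hypothetical; the directory must already exist and be owned by the operating-system user the server runs as):
CREATE TABLESPACE archive_space LOCATION '/mnt/bigdisk/ivorysql';  -- hypothetical path
ALTER TABLE big_history SET TABLESPACE archive_space;              -- hypothetical table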
If your system supports per-user disk quotas, then the database will naturally be subject to whatever quota is placed on the user the server runs as. Exceeding the quota will have the same bad effects as running out of disk space entirely.