Monitoring

1. Monitoring Database Activity

1.1. Standard Unix Tools

On most Unix platforms, IvorySQL modifies its command title as reported by ps, so that individual server processes can readily be identified. A sample display is

$ ps auxww | grep ^postgres
postgres  15551  0.0  0.1  57536  7132 pts/0    S    18:02   0:00 postgres -i
postgres  15554  0.0  0.0  57536  1184 ?        Ss   18:02   0:00 postgres: background writer
postgres  15555  0.0  0.0  57536   916 ?        Ss   18:02   0:00 postgres: checkpointer
postgres  15556  0.0  0.0  57536   916 ?        Ss   18:02   0:00 postgres: walwriter
postgres  15557  0.0  0.0  58504  2244 ?        Ss   18:02   0:00 postgres: autovacuum launcher
postgres  15558  0.0  0.0  17512  1068 ?        Ss   18:02   0:00 postgres: stats collector
postgres  15582  0.0  0.0  58772  3080 ?        Ss   18:04   0:00 postgres: joe runbug 127.0.0.1 idle
postgres  15606  0.0  0.0  58772  3052 ?        Ss   18:07   0:00 postgres: tgl regression [local] SELECT waiting
postgres  15610  0.0  0.0  58772  3056 ?        Ss   18:07   0:00 postgres: tgl regression [local] idle in transaction

(The appropriate invocation of ps varies across different platforms, as do the details of what is shown. This example is from a recent Linux system.) The first process listed here is the primary server process. The command arguments shown for it are the same ones used when it was launched. The next five processes are background worker processes automatically launched by the primary process. (The “autovacuum launcher” process will not be present if you have set the system not to run autovacuum.) Each of the remaining processes is a server process handling one client connection. Each such process sets its command line display in the form

postgres: user database host activity

The user, database, and (client) host items remain the same for the life of the client connection, but the activity indicator changes. The activity can be idle (i.e., waiting for a client command), idle in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT. Also, waiting is appended if the server process is presently waiting on a lock held by another session. In the above example we can infer that process 15606 is waiting for process 15610 to complete its transaction and thereby release some lock. (Process 15610 must be the blocker, because there is no other active session. In more complicated cases it would be necessary to look into the pg_locks system view to determine who is blocking whom.)
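
For example, a query along these lines (an illustrative sketch; it relies on the pg_blocking_pids() function and the pg_stat_activity view) lists each waiting session together with the process IDs that are blocking it:

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;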

If cluster_name has been configured the cluster name will also be shown in ps output:

$ psql -c 'SHOW cluster_name'
 cluster_name
--------------
 server1
(1 row)

$ ps aux|grep server1
postgres   27093  0.0  0.0  30096  2752 ?        Ss   11:34   0:00 postgres: server1: background writer
...

If you have turned off update_process_title then the activity indicator is not updated; the process title is set only once when a new process is launched. On some platforms this saves a measurable amount of per-command overhead; on others it’s insignificant.

Tip

Solaris requires special handling. You must use /usr/ucb/ps, rather than /bin/ps. You also must use two w flags, not just one. In addition, your original invocation of the postgres command must have a shorter ps status display than that provided by each server process. If you fail to do all three things, the ps output for each server process will be the original postgres command line.

1.2. The Cumulative Statistics System

IvorySQL’s cumulative statistics system supports collection and reporting of information about server activity. Presently, accesses to tables and indexes in both disk-block and individual-row terms are counted. The total number of rows in each table, and information about vacuum and analyze actions for each table are also counted. If enabled, calls to user-defined functions and the total time spent in each one are counted as well.

IvorySQL also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the cumulative statistics system.

1.2.1. Statistics Collection Configuration

Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in postgresql.conf.

The parameter track_activities enables monitoring of the current command being executed by any server process.

The parameter track_counts controls whether cumulative statistics are collected about table and index accesses.

The parameter track_functions enables tracking of usage of user-defined functions.

The parameter track_io_timing enables monitoring of block read and write times.

The parameter track_wal_io_timing enables monitoring of WAL write times.

Normally these parameters are set in postgresql.conf so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the SET command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with SET.)
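
As an illustration (the specific values shown are examples, not recommendations), these parameters can be set cluster-wide with ALTER SYSTEM and reloaded, or toggled by a superuser for a single session:

ALTER SYSTEM SET track_io_timing = on;
ALTER SYSTEM SET track_functions = 'all';   -- 'none', 'pl', or 'all'
SELECT pg_reload_conf();

-- superuser-only, affects the current session
SET track_activities = on;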

Cumulative statistics are collected in shared memory. Every IvorySQL process collects statistics locally, then updates the shared data at appropriate intervals. When a server, including a physical replica, shuts down cleanly, a permanent copy of the statistics data is stored in the pg_stat subdirectory, so that statistics can be retained across server restarts. In contrast, when starting from an unclean shutdown (e.g., after an immediate shutdown, a server crash, starting from a base backup, and point-in-time recovery), all statistics counters are reset.

1.2.2. Viewing Statistics

Several predefined views, listed in Table 1, are available to show the current state of the system. There are also several other views, listed in Table 2, available to show the accumulated statistics. Alternatively, one can build custom views using the underlying cumulative statistics functions.

When using the cumulative statistics views and functions to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process flushes out accumulated statistics to shared memory just before going idle, but not more frequently than once per PGSTAT_MIN_INTERVAL milliseconds (1 second unless altered while building the server); so a query or transaction still in progress does not affect the displayed totals and the displayed information lags behind actual activity. However, current-query information collected by track_activities is always up-to-date.

Another important point is that when a server process is asked to display any of the accumulated statistics, accessed values are cached until the end of its current transaction in the default configuration. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you. When analyzing statistics interactively, or with expensive queries, the time delta between accesses to individual statistics can lead to significant skew in the cached statistics. To minimize skew, stats_fetch_consistency can be set to snapshot, at the price of increased memory usage for caching not-needed statistics data. Conversely, if it’s known that statistics are only accessed once, caching accessed statistics is unnecessary and can be avoided by setting stats_fetch_consistency to none. You can invoke pg_stat_clear_snapshot() to discard the current transaction’s statistics snapshot or cached values (if any). The next use of statistical information will (when in snapshot mode) cause a new snapshot to be built or (when in cache mode) accessed statistics to be cached.
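
A minimal sketch of working with this snapshot behavior inside a transaction (the views queried here are the standard ones; the aggregate chosen is only an example):

BEGIN;
SET LOCAL stats_fetch_consistency = snapshot;  -- 'cache' (default) and 'none' are the alternatives
SELECT sum(seq_scan) FROM pg_stat_user_tables; -- builds and caches the snapshot
SELECT pg_stat_clear_snapshot();               -- discard it explicitly
SELECT sum(seq_scan) FROM pg_stat_user_tables; -- a fresh snapshot is built here
COMMIT;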

A transaction can also see its own statistics (not yet flushed out to the shared memory statistics) in the views pg_stat_xact_all_tables, pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and pg_stat_xact_user_functions. These numbers do not act as stated above; instead they update continuously throughout the transaction.
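
For instance, in a transaction that has just modified a table, the per-transaction view already reflects the change (my_table is a hypothetical table used only for illustration):

BEGIN;
INSERT INTO my_table VALUES (1);
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_xact_user_tables
WHERE n_tup_ins > 0;   -- shows the insert immediately, before commit
COMMIT;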

Some of the information in the dynamic statistics views shown in Table 1 is security restricted. Ordinary users can only see all the information about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and roles with privileges of the built-in role pg_read_all_stats can see all the information about all sessions.

Dynamic Statistics Views

View Name

Description

pg_stat_activity

One row per server process, showing information related to the current activity of that process, such as state and current query.

pg_stat_replication

One row per WAL sender process, showing statistics about replication to that sender’s connected standby server.

pg_stat_wal_receiver

Only one row, showing statistics about the WAL receiver from that receiver’s connected server.

pg_stat_recovery_prefetch

Only one row, showing statistics about blocks prefetched during recovery.

pg_stat_subscription

At least one row per subscription, showing information about the subscription workers.

pg_stat_ssl

One row per connection (regular and replication), showing information about SSL used on this connection.

pg_stat_gssapi

One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection.

pg_stat_progress_analyze

One row for each backend (including autovacuum worker processes) running ANALYZE, showing current progress.

pg_stat_progress_create_index

One row for each backend running CREATE INDEX or REINDEX, showing current progress.

pg_stat_progress_vacuum

One row for each backend (including autovacuum worker processes) running VACUUM, showing current progress.

pg_stat_progress_cluster

One row for each backend running CLUSTER or VACUUM FULL, showing current progress.

pg_stat_progress_basebackup

One row for each WAL sender process streaming a base backup, showing current progress.

pg_stat_progress_copy

One row for each backend running COPY, showing current progress.

Collected Statistics Views

View Name

Description

pg_stat_archiver

One row only, showing statistics about the WAL archiver process’s activity. See pg_stat_archiver for details.

pg_stat_bgwriter

One row only, showing statistics about the background writer process’s activity. See pg_stat_bgwriter for details.

pg_stat_wal

One row only, showing statistics about WAL activity. See pg_stat_wal for details.

pg_stat_database

One row per database, showing database-wide statistics. See pg_stat_database for details.

pg_stat_database_conflicts

One row per database, showing database-wide statistics about query cancels due to conflict with recovery on standby servers. See pg_stat_database_conflicts for details.

pg_stat_all_tables

One row for each table in the current database, showing statistics about accesses to that specific table. See pg_stat_all_tables for details.

pg_stat_sys_tables

Same as pg_stat_all_tables, except that only system tables are shown.

pg_stat_user_tables

Same as pg_stat_all_tables, except that only user tables are shown.

pg_stat_xact_all_tables

Similar to pg_stat_all_tables, but counts actions taken so far within the current transaction (which are not yet included in pg_stat_all_tables and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view.

pg_stat_xact_sys_tables

Same as pg_stat_xact_all_tables, except that only system tables are shown.

pg_stat_xact_user_tables

Same as pg_stat_xact_all_tables, except that only user tables are shown.

pg_stat_all_indexes

One row for each index in the current database, showing statistics about accesses to that specific index. See pg_stat_all_indexes for details.

pg_stat_sys_indexes

Same as pg_stat_all_indexes, except that only indexes on system tables are shown.

pg_stat_user_indexes

Same as pg_stat_all_indexes, except that only indexes on user tables are shown.

pg_statio_all_tables

One row for each table in the current database, showing statistics about I/O on that specific table. See pg_statio_all_tables for details.

pg_statio_sys_tables

Same as pg_statio_all_tables, except that only system tables are shown.

pg_statio_user_tables

Same as pg_statio_all_tables, except that only user tables are shown.

pg_statio_all_indexes

One row for each index in the current database, showing statistics about I/O on that specific index. See pg_statio_all_indexes for details.

pg_statio_sys_indexes

Same as pg_statio_all_indexes, except that only indexes on system tables are shown.

pg_statio_user_indexes

Same as pg_statio_all_indexes, except that only indexes on user tables are shown.

pg_statio_all_sequences

One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See pg_statio_all_sequences for details.

pg_statio_sys_sequences

Same as pg_statio_all_sequences, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.)

pg_statio_user_sequences

Same as pg_statio_all_sequences, except that only user sequences are shown.

pg_stat_user_functions

One row for each tracked function, showing statistics about executions of that function. See pg_stat_user_functions for details.

pg_stat_xact_user_functions

Similar to pg_stat_user_functions, but counts only calls during the current transaction (which are not yet included in pg_stat_user_functions).

pg_stat_slru

One row per SLRU, showing statistics of operations. See pg_stat_slru for details.

pg_stat_replication_slots

One row per replication slot, showing statistics about the replication slot’s usage. See pg_stat_replication_slots for details.

pg_stat_subscription_stats

One row per subscription, showing statistics about errors. See pg_stat_subscription_stats for details.

The per-index statistics are particularly useful to determine which indexes are being used and how effective they are.
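
For example, a query of this form (a sketch against pg_stat_user_indexes) surfaces indexes that are rarely or never scanned and may be candidates for review:

SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, relname
LIMIT 20;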

The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the entire story: due to the way in which IvorySQL handles disk I/O, data that is not in the IvorySQL buffer cache might still reside in the kernel’s I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more detailed information on IvorySQL I/O behavior are advised to use the IvorySQL statistics views in combination with operating system utilities that allow insight into the kernel’s handling of I/O.
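
As an illustration, a rough per-table buffer cache hit ratio can be derived from pg_statio_user_tables (a sketch; interpret the numbers with the kernel-cache caveat above in mind):

SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read + heap_blks_hit DESC
LIMIT 10;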

1.2.3. pg_stat_activity

The pg_stat_activity view will have one row per server process, showing information related to the current activity of that process.

pg_stat_activity View

Column Type Description

datid `oid`OID of the database this backend is connected to

datname `name`Name of the database this backend is connected to

pid `integer`Process ID of this backend

leader_pid `integer`Process ID of the parallel group leader, if this process is a parallel query worker. NULL if this process is a parallel group leader or does not participate in parallel query.

usesysid `oid`OID of the user logged into this backend

usename `name`Name of the user logged into this backend

application_name `text`Name of the application that is connected to this backend

client_addr `inet`IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum.

client_hostname `text`Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.

client_port `integer`TCP port number that the client is using for communication with this backend, or -1 if a Unix socket is used. If this field is null, it indicates that this is an internal server process.

backend_start `timestamp with time zone`Time when this process was started. For client backends, this is the time the client connected to the server.

xact_start `timestamp with time zone`Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the query_start column.

query_start `timestamp with time zone`Time when the currently active query was started, or if state is not active, when the last query was started

state_change `timestamp with time zone`Time when the state was last changed

wait_event_type `text`The type of event for which the backend is waiting, if any; otherwise NULL.

wait_event `text`Wait event name if backend is currently waiting, otherwise NULL.

state `text`Current overall state of this backend. Possible values are: active (the backend is executing a query); idle (the backend is waiting for a new client command); idle in transaction (the backend is in a transaction, but is not currently executing a query); idle in transaction (aborted) (similar to idle in transaction, except that one of the statements in the transaction caused an error); fastpath function call (the backend is executing a fast-path function); disabled (reported if track_activities is disabled in this backend).

backend_xid `xid`Top-level transaction identifier of this backend, if any.

backend_xmin `xid`The current backend’s xmin horizon.

query_id `bigint`Identifier of this backend’s most recent query. If state is active this field shows the identifier of the currently executing query. In all other states, it shows the identifier of the last query that was executed. Query identifiers are not computed by default, so this field will be null unless the compute_query_id parameter is enabled or a third-party module that computes query identifiers is configured.

query `text`Text of this backend’s most recent query. If state is active this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter track_activity_query_size.

backend_type `text`Type of current backend. Possible types are autovacuum launcher, autovacuum worker, logical replication launcher, logical replication worker, parallel worker, background writer, client backend, checkpointer, archiver, startup, walreceiver, walsender and walwriter. In addition, background workers registered by extensions may have additional types.

Note

The wait_event and state columns are independent. If a backend is in the active state, it may or may not be waiting on some event. If the state is active and wait_event is non-null, it means that a query is being executed, but is being blocked somewhere in the system.
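
For example, a query such as the following (a sketch) shows the non-idle sessions other than the current one, together with their state and any wait event:

SELECT pid, usename, state, wait_event_type, wait_event,
       now() - query_start AS running_for,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND pid <> pg_backend_pid();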

Wait Event Types

Wait Event Type

Description

Activity

The server process is idle. This event type indicates a process waiting for activity in its main processing loop. wait_event will identify the specific wait point.

BufferPin

The server process is waiting for exclusive access to a data buffer. Buffer pin waits can be protracted if another process holds an open cursor that last read data from the buffer in question.

Client

The server process is waiting for activity on a socket connected to a user application. Thus, the server expects something to happen that is independent of its internal processes. wait_event will identify the specific wait point.

Extension

The server process is waiting for some condition defined by an extension module.

IO

The server process is waiting for an I/O operation to complete. wait_event will identify the specific wait point.

IPC

The server process is waiting for some interaction with another server process. wait_event will identify the specific wait point.

Lock

The server process is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension. wait_event will identify the type of lock awaited.

LWLock

The server process is waiting for a lightweight lock. Most such locks protect a particular data structure in shared memory. wait_event will contain a name identifying the purpose of the lightweight lock. (Some locks have specific names; others are part of a group of locks each with a similar purpose.)

Timeout

The server process is waiting for a timeout to expire. wait_event will identify the specific wait point.

Wait Events of Type Activity

Activity Wait Event

Description

ArchiverMain

Waiting in main loop of archiver process.

AutoVacuumMain

Waiting in main loop of autovacuum launcher process.

BgWriterHibernate

Waiting in background writer process, hibernating.

BgWriterMain

Waiting in main loop of background writer process.

CheckpointerMain

Waiting in main loop of checkpointer process.

LogicalApplyMain

Waiting in main loop of logical replication apply process.

LogicalLauncherMain

Waiting in main loop of logical replication launcher process.

RecoveryWalStream

Waiting in main loop of startup process for WAL to arrive, during streaming recovery.

SysLoggerMain

Waiting in main loop of syslogger process.

WalReceiverMain

Waiting in main loop of WAL receiver process.

WalSenderMain

Waiting in main loop of WAL sender process.

WalWriterMain

Waiting in main loop of WAL writer process.

Wait Events of Type BufferPin

BufferPin Wait Event

Description

BufferPin

Waiting to acquire an exclusive pin on a buffer.

Wait Events of Type Client

Client Wait Event

Description

ClientRead

Waiting to read data from the client.

ClientWrite

Waiting to write data to the client.

GSSOpenServer

Waiting to read data from the client while establishing a GSSAPI session.

LibPQWalReceiverConnect

Waiting in WAL receiver to establish connection to remote server.

LibPQWalReceiverReceive

Waiting in WAL receiver to receive data from remote server.

SSLOpenServer

Waiting for SSL while attempting connection.

WalSenderWaitForWAL

Waiting for WAL to be flushed in WAL sender process.

WalSenderWriteData

Waiting for any activity when processing replies from WAL receiver in WAL sender process.

Wait Events of Type Extension

Extension Wait Event

Description

Extension

Waiting in an extension.

Wait Events of Type IO

IO Wait Event

Description

BaseBackupRead

Waiting for base backup to read from a file.

BufFileRead

Waiting for a read from a buffered file.

BufFileWrite

Waiting for a write to a buffered file.

BufFileTruncate

Waiting for a buffered file to be truncated.

ControlFileRead

Waiting for a read from the pg_control file.

ControlFileSync

Waiting for the pg_control file to reach durable storage.

ControlFileSyncUpdate

Waiting for an update to the pg_control file to reach durable storage.

ControlFileWrite

Waiting for a write to the pg_control file.

ControlFileWriteUpdate

Waiting for a write to update the pg_control file.

CopyFileRead

Waiting for a read during a file copy operation.

CopyFileWrite

Waiting for a write during a file copy operation.

DSMFillZeroWrite

Waiting to fill a dynamic shared memory backing file with zeroes.

DataFileExtend

Waiting for a relation data file to be extended.

DataFileFlush

Waiting for a relation data file to reach durable storage.

DataFileImmediateSync

Waiting for an immediate synchronization of a relation data file to durable storage.

DataFilePrefetch

Waiting for an asynchronous prefetch from a relation data file.

DataFileRead

Waiting for a read from a relation data file.

DataFileSync

Waiting for changes to a relation data file to reach durable storage.

DataFileTruncate

Waiting for a relation data file to be truncated.

DataFileWrite

Waiting for a write to a relation data file.

LockFileAddToDataDirRead

Waiting for a read while adding a line to the data directory lock file.

LockFileAddToDataDirSync

Waiting for data to reach durable storage while adding a line to the data directory lock file.

LockFileAddToDataDirWrite

Waiting for a write while adding a line to the data directory lock file.

LockFileCreateRead

Waiting to read while creating the data directory lock file.

LockFileCreateSync

Waiting for data to reach durable storage while creating the data directory lock file.

LockFileCreateWrite

Waiting for a write while creating the data directory lock file.

LockFileReCheckDataDirRead

Waiting for a read during recheck of the data directory lock file.

LogicalRewriteCheckpointSync

Waiting for logical rewrite mappings to reach durable storage during a checkpoint.

LogicalRewriteMappingSync

Waiting for mapping data to reach durable storage during a logical rewrite.

LogicalRewriteMappingWrite

Waiting for a write of mapping data during a logical rewrite.

LogicalRewriteSync

Waiting for logical rewrite mappings to reach durable storage.

LogicalRewriteTruncate

Waiting for truncate of mapping data during a logical rewrite.

LogicalRewriteWrite

Waiting for a write of logical rewrite mappings.

RelationMapRead

Waiting for a read of the relation map file.

RelationMapSync

Waiting for the relation map file to reach durable storage.

RelationMapWrite

Waiting for a write to the relation map file.

ReorderBufferRead

Waiting for a read during reorder buffer management.

ReorderBufferWrite

Waiting for a write during reorder buffer management.

ReorderLogicalMappingRead

Waiting for a read of a logical mapping during reorder buffer management.

ReplicationSlotRead

Waiting for a read from a replication slot control file.

ReplicationSlotRestoreSync

Waiting for a replication slot control file to reach durable storage while restoring it to memory.

ReplicationSlotSync

Waiting for a replication slot control file to reach durable storage.

ReplicationSlotWrite

Waiting for a write to a replication slot control file.

SLRUFlushSync

Waiting for SLRU data to reach durable storage during a checkpoint or database shutdown.

SLRURead

Waiting for a read of an SLRU page.

SLRUSync

Waiting for SLRU data to reach durable storage following a page write.

SLRUWrite

Waiting for a write of an SLRU page.

SnapbuildRead

Waiting for a read of a serialized historical catalog snapshot.

SnapbuildSync

Waiting for a serialized historical catalog snapshot to reach durable storage.

SnapbuildWrite

Waiting for a write of a serialized historical catalog snapshot.

TimelineHistoryFileSync

Waiting for a timeline history file received via streaming replication to reach durable storage.

TimelineHistoryFileWrite

Waiting for a write of a timeline history file received via streaming replication.

TimelineHistoryRead

Waiting for a read of a timeline history file.

TimelineHistorySync

Waiting for a newly created timeline history file to reach durable storage.

TimelineHistoryWrite

Waiting for a write of a newly created timeline history file.

TwophaseFileRead

Waiting for a read of a two phase state file.

TwophaseFileSync

Waiting for a two phase state file to reach durable storage.

TwophaseFileWrite

Waiting for a write of a two phase state file.

VersionFileWrite

Waiting for the version file to be written while creating a database.

WALBootstrapSync

Waiting for WAL to reach durable storage during bootstrapping.

WALBootstrapWrite

Waiting for a write of a WAL page during bootstrapping.

WALCopyRead

Waiting for a read when creating a new WAL segment by copying an existing one.

WALCopySync

Waiting for a new WAL segment created by copying an existing one to reach durable storage.

WALCopyWrite

Waiting for a write when creating a new WAL segment by copying an existing one.

WALInitSync

Waiting for a newly initialized WAL file to reach durable storage.

WALInitWrite

Waiting for a write while initializing a new WAL file.

WALRead

Waiting for a read from a WAL file.

WALSenderTimelineHistoryRead

Waiting for a read from a timeline history file during a walsender timeline command.

WALSync

Waiting for a WAL file to reach durable storage.

WALSyncMethodAssign

Waiting for data to reach durable storage while assigning a new WAL sync method.

WALWrite

Waiting for a write to a WAL file.

Wait Events of Type IPC

IPC Wait Event

Description

AppendReady

Waiting for subplan nodes of an Append plan node to be ready.

ArchiveCleanupCommand

Waiting for archive_cleanup_command to complete.

ArchiveCommand

Waiting for archive_command to complete.

BackendTermination

Waiting for the termination of another backend.

BackupWaitWalArchive

Waiting for WAL files required for a backup to be successfully archived.

BgWorkerShutdown

Waiting for background worker to shut down.

BgWorkerStartup

Waiting for background worker to start up.

BtreePage

Waiting for the page number needed to continue a parallel B-tree scan to become available.

BufferIO

Waiting for buffer I/O to complete.

CheckpointDone

Waiting for a checkpoint to complete.

CheckpointStart

Waiting for a checkpoint to start.

ExecuteGather

Waiting for activity from a child process while executing a Gather plan node.

HashBatchAllocate

Waiting for an elected Parallel Hash participant to allocate a hash table.

HashBatchElect

Waiting to elect a Parallel Hash participant to allocate a hash table.

HashBatchLoad

Waiting for other Parallel Hash participants to finish loading a hash table.

HashBuildAllocate

Waiting for an elected Parallel Hash participant to allocate the initial hash table.

HashBuildElect

Waiting to elect a Parallel Hash participant to allocate the initial hash table.

HashBuildHashInner

Waiting for other Parallel Hash participants to finish hashing the inner relation.

HashBuildHashOuter

Waiting for other Parallel Hash participants to finish partitioning the outer relation.

HashGrowBatchesAllocate

Waiting for an elected Parallel Hash participant to allocate more batches.

HashGrowBatchesDecide

Waiting to elect a Parallel Hash participant to decide on future batch growth.

HashGrowBatchesElect

Waiting to elect a Parallel Hash participant to allocate more batches.

HashGrowBatchesFinish

Waiting for an elected Parallel Hash participant to decide on future batch growth.

HashGrowBatchesRepartition

Waiting for other Parallel Hash participants to finish repartitioning.

HashGrowBucketsAllocate

Waiting for an elected Parallel Hash participant to finish allocating more buckets.

HashGrowBucketsElect

Waiting to elect a Parallel Hash participant to allocate more buckets.

HashGrowBucketsReinsert

Waiting for other Parallel Hash participants to finish inserting tuples into new buckets.

LogicalSyncData

Waiting for a logical replication remote server to send data for initial table synchronization.

LogicalSyncStateChange

Waiting for a logical replication remote server to change state.

MessageQueueInternal

Waiting for another process to be attached to a shared message queue.

MessageQueuePutMessage

Waiting to write a protocol message to a shared message queue.

MessageQueueReceive

Waiting to receive bytes from a shared message queue.

MessageQueueSend

Waiting to send bytes to a shared message queue.

ParallelBitmapScan

Waiting for parallel bitmap scan to become initialized.

ParallelCreateIndexScan

Waiting for parallel CREATE INDEX workers to finish heap scan.

ParallelFinish

Waiting for parallel workers to finish computing.

ProcArrayGroupUpdate

Waiting for the group leader to clear the transaction ID at end of a parallel operation.

ProcSignalBarrier

Waiting for a barrier event to be processed by all backends.

Promote

Waiting for standby promotion.

RecoveryConflictSnapshot

Waiting for recovery conflict resolution for a vacuum cleanup.

RecoveryConflictTablespace

Waiting for recovery conflict resolution for dropping a tablespace.

RecoveryEndCommand

Waiting for recovery_end_command to complete.

RecoveryPause

Waiting for recovery to be resumed.

ReplicationOriginDrop

Waiting for a replication origin to become inactive so it can be dropped.

ReplicationSlotDrop

Waiting for a replication slot to become inactive so it can be dropped.

RestoreCommand

Waiting for restore_command to complete.

SafeSnapshot

Waiting to obtain a valid snapshot for a READ ONLY DEFERRABLE transaction.

SyncRep

Waiting for confirmation from a remote server during synchronous replication.

WalReceiverExit

Waiting for the WAL receiver to exit.

WalReceiverWaitStart

Waiting for startup process to send initial data for streaming replication.

XactGroupUpdate

Waiting for the group leader to update transaction status at end of a parallel operation.

Wait Events of Type Lock

Lock Wait Event

Description

advisory

Waiting to acquire an advisory user lock.

extend

Waiting to extend a relation.

frozenid

Waiting to update pg_database.datfrozenxid and pg_database.datminmxid.

object

Waiting to acquire a lock on a non-relation database object.

page

Waiting to acquire a lock on a page of a relation.

relation

Waiting to acquire a lock on a relation.

spectoken

Waiting to acquire a speculative insertion lock.

transactionid

Waiting for a transaction to finish.

tuple

Waiting to acquire a lock on a tuple.

userlock

Waiting to acquire a user lock.

virtualxid

Waiting to acquire a virtual transaction ID lock.

Wait Events of Type LWLock

LWLock Wait Event

Description

AddinShmemInit

Waiting to manage an extension’s space allocation in shared memory.

AutoFile

Waiting to update the postgresql.auto.conf file.

Autovacuum

Waiting to read or update the current state of autovacuum workers.

AutovacuumSchedule

Waiting to ensure that a table selected for autovacuum still needs vacuuming.

BackgroundWorker

Waiting to read or update background worker state.

BtreeVacuum

Waiting to read or update vacuum-related information for a B-tree index.

BufferContent

Waiting to access a data page in memory.

BufferMapping

Waiting to associate a data block with a buffer in the buffer pool.

CheckpointerComm

Waiting to manage fsync requests.

CommitTs

Waiting to read or update the last value set for a transaction commit timestamp.

CommitTsBuffer

Waiting for I/O on a commit timestamp SLRU buffer.

CommitTsSLRU

Waiting to access the commit timestamp SLRU cache.

ControlFile

Waiting to read or update the pg_control file or create a new WAL file.

DynamicSharedMemoryControl

Waiting to read or update dynamic shared memory allocation information.

LockFastPath

Waiting to read or update a process' fast-path lock information.

LockManager

Waiting to read or update information about “heavyweight” locks.

LogicalRepWorker

Waiting to read or update the state of logical replication workers.

MultiXactGen

Waiting to read or update shared multixact state.

MultiXactMemberBuffer

Waiting for I/O on a multixact member SLRU buffer.

MultiXactMemberSLRU

Waiting to access the multixact member SLRU cache.

MultiXactOffsetBuffer

Waiting for I/O on a multixact offset SLRU buffer.

MultiXactOffsetSLRU

Waiting to access the multixact offset SLRU cache.

MultiXactTruncation

Waiting to read or truncate multixact information.

NotifyBuffer

Waiting for I/O on a NOTIFY message SLRU buffer.

NotifyQueue

Waiting to read or update NOTIFY messages.

NotifyQueueTail

Waiting to update limit on NOTIFY message storage.

NotifySLRU

Waiting to access the NOTIFY message SLRU cache.

OidGen

Waiting to allocate a new OID.

OldSnapshotTimeMap

Waiting to read or update old snapshot control information.

ParallelAppend

Waiting to choose the next subplan during Parallel Append plan execution.

ParallelHashJoin

Waiting to synchronize workers during Parallel Hash Join plan execution.

ParallelQueryDSA

Waiting for parallel query dynamic shared memory allocation.

PerSessionDSA

Waiting for parallel query dynamic shared memory allocation.

PerSessionRecordType

Waiting to access a parallel query’s information about composite types.

PerSessionRecordTypmod

Waiting to access a parallel query’s information about type modifiers that identify anonymous record types.

PerXactPredicateList

Waiting to access the list of predicate locks held by the current serializable transaction during a parallel query.

PredicateLockManager

Waiting to access predicate lock information used by serializable transactions.

ProcArray

Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session’s transaction ID).

RelationMapping

Waiting to read or update a pg_filenode.map file (used to track the filenode assignments of certain system catalogs).

RelCacheInit

Waiting to read or update a pg_internal.init relation cache initialization file.

ReplicationOrigin

Waiting to create, drop or use a replication origin.

ReplicationOriginState

Waiting to read or update the progress of one replication origin.

ReplicationSlotAllocation

Waiting to allocate or free a replication slot.

ReplicationSlotControl

Waiting to read or update replication slot state.

ReplicationSlotIO

Waiting for I/O on a replication slot.

SerialBuffer

Waiting for I/O on a serializable transaction conflict SLRU buffer.

SerializableFinishedList

Waiting to access the list of finished serializable transactions.

SerializablePredicateList

Waiting to access the list of predicate locks held by serializable transactions.

PgStatsDSA

Waiting for stats dynamic shared memory allocator access.

PgStatsHash

Waiting for stats shared memory hash table access.

PgStatsData

Waiting for shared memory stats data access.

SerializableXactHash

Waiting to read or update information about serializable transactions.

SerialSLRU

Waiting to access the serializable transaction conflict SLRU cache.

SharedTidBitmap

Waiting to access a shared TID bitmap during a parallel bitmap index scan.

SharedTupleStore

Waiting to access a shared tuple store during parallel query.

ShmemIndex

Waiting to find or allocate space in shared memory.

SInvalRead

Waiting to retrieve messages from the shared catalog invalidation queue.

SInvalWrite

Waiting to add a message to the shared catalog invalidation queue.

SubtransBuffer

Waiting for I/O on a sub-transaction SLRU buffer.

SubtransSLRU

Waiting to access the sub-transaction SLRU cache.

SyncRep

Waiting to read or update information about the state of synchronous replication.

SyncScan

Waiting to select the starting location of a synchronized table scan.

TablespaceCreate

Waiting to create or drop a tablespace.

TwoPhaseState

Waiting to read or update the state of prepared transactions.

WALBufMapping

Waiting to replace a page in WAL buffers.

WALInsert

Waiting to insert WAL data into a memory buffer.

WALWrite

Waiting for WAL buffers to be written to disk.

WrapLimitsVacuum

Waiting to update limits on transaction id and multixact consumption.

XactBuffer

Waiting for I/O on a transaction status SLRU buffer.

XactSLRU

Waiting to access the transaction status SLRU cache.

XactTruncation

Waiting to execute pg_xact_status or update the oldest transaction ID available to it.

XidGen

Waiting to allocate a new transaction ID.

Note

Extensions can add LWLock types to the list shown in Table 12. In some cases, the name assigned by an extension will not be available in all server processes; so an LWLock wait event might be reported as just “extension” rather than the extension-assigned name.

Wait Events of Type Timeout

Timeout Wait Event

Description

BaseBackupThrottle

Waiting during base backup when throttling activity.

CheckpointWriteDelay

Waiting between writes while performing a checkpoint.

PgSleep

Waiting due to a call to pg_sleep or a sibling function.

RecoveryApplyDelay

Waiting to apply WAL during recovery because of a delay setting.

RecoveryRetrieveRetryInterval

Waiting during recovery when WAL data is not available from any source (pg_wal, archive or stream).

RegisterSyncRequest

Waiting while sending synchronization requests to the checkpointer, because the request queue is full.

VacuumDelay

Waiting in a cost-based vacuum delay point.

VacuumTruncate

Waiting to acquire an exclusive lock to truncate off any empty pages at the end of a table vacuumed.

Here is an example of how wait events can be viewed:

SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event is NOT NULL;
 pid  | wait_event_type | wait_event
------+-----------------+------------
 2540 | Lock            | relation
 6644 | LWLock          | ProcArray
(2 rows)

1.2.4. pg_stat_replication

The pg_stat_replication view will contain one row per WAL sender process, showing statistics about replication to that sender’s connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers.

pg_stat_replication View

Column Type Description

pid `integer`Process ID of a WAL sender process

usesysid `oid`OID of the user logged into this WAL sender process

usename `name`Name of the user logged into this WAL sender process

application_name `text`Name of the application that is connected to this WAL sender

client_addr `inet`IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine.

client_hostname `text`Host name of the connected client, as reported by a reverse DNS lookup of client_addr. This field will only be non-null for IP connections, and only when log_hostname is enabled.

client_port `integer`TCP port number that the client is using for communication with this WAL sender, or -1 if a Unix socket is used

backend_start `timestamp with time zone`Time when this process was started, i.e., when the client connected to this WAL sender

backend_xmin `xid`This standby’s xmin horizon reported by hot_standby_feedback.

state `text`Current WAL sender state. Possible values are: startup (this WAL sender is starting up); catchup (this WAL sender’s connected standby is catching up with the primary); streaming (this WAL sender is streaming changes after its connected standby server has caught up with the primary); backup (this WAL sender is sending a backup); stopping (this WAL sender is stopping).

sent_lsn `pg_lsn`Last write-ahead log location sent on this connection

write_lsn `pg_lsn`Last write-ahead log location written to disk by this standby server

flush_lsn `pg_lsn`Last write-ahead log location flushed to disk by this standby server

replay_lsn `pg_lsn`Last write-ahead log location replayed into the database on this standby server

write_lag `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that synchronous_commit level remote_write incurred while committing if this server was configured as a synchronous standby.

flush_lag `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that synchronous_commit level on incurred while committing if this server was configured as a synchronous standby.

replay_lag `interval`Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that synchronous_commit level remote_apply incurred while committing if this server was configured as a synchronous standby.

sync_priority `integer`Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication.

sync_state `text`Synchronous state of this standby server. Possible values are: async (this standby server is asynchronous); potential (this standby server is currently asynchronous, but can potentially become synchronous if one of the current synchronous standbys fails); sync (this standby server is synchronous); quorum (this standby server is considered as a candidate for quorum standbys).

reply_time `timestamp with time zone`Send time of last reply message received from standby server

The lag times reported in the pg_stat_replication view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the replay_lag column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL.
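
For example, on the primary a query like this (a sketch; pg_current_wal_lsn() is only meaningful on a primary server) combines the reported lag intervals with an approximate byte lag per standby:

SELECT application_name,
       client_addr,
       state,
       sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_bytes_behind,
       write_lag,
       flush_lag,
       replay_lag
FROM pg_stat_replication;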

Lag times work automatically for physical replication. Logical decoding plugins may optionally emit tracking messages; if they do not, the tracking mechanism will simply display NULL lag.

Note

The reported lag times are not predictions of how long it will take for the standby to catch up with the sending server assuming the current rate of replay. Such a system would show similar times while new WAL is being generated, but would differ when the sender becomes idle. In particular, when the standby has caught up completely, pg_stat_replication shows the time taken to write, flush and replay the most recent reported WAL location rather than zero as some users might expect. This is consistent with the goal of measuring synchronous commit and transaction visibility delays for recent write transactions. To reduce confusion for users expecting a different model of lag, the lag columns revert to NULL after a short time on a fully replayed idle system. Monitoring systems should choose whether to represent this as missing data, zero or continue to display the last known value.

1.2.5. pg_stat_replication_slots

The pg_stat_replication_slots view will contain one row per logical replication slot, showing statistics about its usage.

pg_stat_replication_slots View

Column Type Description

slot_name `text`A unique, cluster-wide identifier for the replication slot

spill_txns `bigint`Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded logical_decoding_work_mem. The counter gets incremented for both top-level transactions and subtransactions.

spill_count `bigint`Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times.

spill_bytes `bigint`Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning logical_decoding_work_mem.

stream_txns `bigint`Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded logical_decoding_work_mem. Streaming only works with top-level transactions (subtransactions can’t be streamed independently), so the counter is not incremented for subtransactions.

stream_count `bigint`Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times.

stream_bytes `bigint`Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune logical_decoding_work_mem.

total_txns `bigint`Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled.

total_bytes `bigint`Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled.

stats_reset `timestamp with time zone`Time at which these statistics were last reset
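
As an example, the spill and stream counters can be compared across slots to decide whether logical_decoding_work_mem needs tuning, and a slot’s counters can be reset with pg_stat_reset_replication_slot() (the slot name below is illustrative):

SELECT slot_name, spill_txns, spill_bytes, stream_txns, stream_bytes, total_txns
FROM pg_stat_replication_slots
ORDER BY spill_bytes DESC;

SELECT pg_stat_reset_replication_slot('my_slot');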

1.2.6. pg_stat_wal_receiver

The pg_stat_wal_receiver view will contain only one row, showing statistics about the WAL receiver from that receiver’s connected server.

pg_stat_wal_receiver View

Column Type Description

pid `integer`Process ID of the WAL receiver process

status `text`Activity status of the WAL receiver process

receive_start_lsn `pg_lsn`First write-ahead log location used when WAL receiver is started

receive_start_tli `integer`First timeline number used when WAL receiver is started

written_lsn `pg_lsn`Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks.

flushed_lsn `pg_lsn`Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started

received_tli `integer`Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started

last_msg_send_time `timestamp with time zone`Send time of last message received from origin WAL sender

last_msg_receipt_time `timestamp with time zone`Receipt time of last message received from origin WAL sender

latest_end_lsn `pg_lsn`Last write-ahead log location reported to origin WAL sender

latest_end_time `timestamp with time zone`Time of last write-ahead log location reported to origin WAL sender

slot_name `text`Replication slot name used by this WAL receiver

sender_host `text`Host of the IvorySQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with /.)

sender_port `integer`Port number of the IvorySQL instance this WAL receiver is connected to.

conninfo `text`Connection string used by this WAL receiver, with security-sensitive fields obfuscated.

1.2.7. pg_stat_recovery_prefetch

The pg_stat_recovery_prefetch view will contain only one row. The columns wal_distance, block_distance and io_depth show current values, and the other columns show cumulative counters that can be reset with the pg_stat_reset_shared function.

pg_stat_recovery_prefetch View

Column Type Description

stats_reset `timestamp with time zone`Time at which these statistics were last reset

prefetch `bigint`Number of blocks prefetched because they were not in the buffer pool

hit `bigint`Number of blocks not prefetched because they were already in the buffer pool

skip_init `bigint`Number of blocks not prefetched because they would be zero-initialized

skip_new `bigint`Number of blocks not prefetched because they didn’t exist yet

skip_fpw `bigint`Number of blocks not prefetched because a full page image was included in the WAL

skip_rep `bigint`Number of blocks not prefetched because they were already recently prefetched

wal_distance `int`How many bytes ahead the prefetcher is looking

block_distance `int`How many blocks ahead the prefetcher is looking

io_depth `int`How many prefetches have been initiated but are not yet known to have completed
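
For example (a sketch), the current prefetch state can be inspected and the cumulative counters reset with pg_stat_reset_shared(), as mentioned above:

SELECT * FROM pg_stat_recovery_prefetch;

SELECT pg_stat_reset_shared('recovery_prefetch');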

1.2.8. pg_stat_subscription

pg_stat_subscription View

Column Type Description

subid `oid`OID of the subscription

subname `name`Name of the subscription

pid `integer`Process ID of the subscription worker process

relid `oid`OID of the relation that the worker is synchronizing; null for the main apply worker

received_lsn `pg_lsn`Last write-ahead log location received, the initial value of this field being 0

last_msg_send_time `timestamp with time zone`Send time of last message received from origin WAL sender

last_msg_receipt_time `timestamp with time zone`Receipt time of last message received from origin WAL sender

latest_end_lsn `pg_lsn`Last write-ahead log location reported to origin WAL sender

latest_end_time `timestamp with time zone`Time of last write-ahead log location reported to origin WAL sender

1.2.9. pg_stat_subscription_stats

The pg_stat_subscription_stats view will contain one row per subscription.

pg_stat_subscription_stats View

Column Type Description

subid `oid`OID of the subscription

subname `name`Name of the subscription

apply_error_count `bigint`Number of times an error occurred while applying changes

sync_error_count `bigint`Number of times an error occurred during the initial table synchronization

stats_reset `timestamp with time zone`Time at which these statistics were last reset

1.2.10. pg_stat_ssl

The pg_stat_ssl view will contain one row per backend or WAL sender process, showing statistics about SSL usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.

pg_stat_ssl View

Column Type Description

pid `integer`Process ID of a backend or WAL sender process

ssl `boolean`True if SSL is used on this connection

version `text`Version of SSL in use, or NULL if SSL is not in use on this connection

cipher `text`Name of SSL cipher in use, or NULL if SSL is not in use on this connection

bits `integer`Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection

client_dn `text`Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than NAMEDATALEN (64 characters in a standard build).

client_serial `numeric`Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers).

issuer_dn `text`DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like client_dn.
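
For example, joining this view to pg_stat_activity as described above shows which client connections are encrypted and with what cipher (a sketch):

SELECT a.pid, a.usename, a.client_addr, s.version, s.cipher, s.bits
FROM pg_stat_activity AS a
JOIN pg_stat_ssl AS s USING (pid)
WHERE s.ssl;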

1.2.11. pg_stat_gssapi

The pg_stat_gssapi view will contain one row per backend, showing information about GSSAPI usage on this connection. It can be joined to pg_stat_activity or pg_stat_replication on the pid column to get more details about the connection.

pg_stat_gssapi View

Column Type Description

pid `integer`Process ID of a backend

gss_authenticated `boolean`True if GSSAPI authentication was used for this connection

principal `text`Principal used to authenticate this connection, or NULL if GSSAPI was not used to authenticate this connection. This field is truncated if the principal is longer than NAMEDATALEN (64 characters in a standard build).

encrypted `boolean`True if GSSAPI encryption is in use on this connection

1.2.12. pg_stat_archiver

The pg_stat_archiver view will always have a single row, containing data about the archiver process of the cluster.

pg_stat_archiver View

archived_count `bigint`Number of WAL files that have been successfully archived

last_archived_wal `text`Name of the WAL file most recently successfully archived

last_archived_time `timestamp with time zone`Time of the most recent successful archive operation

failed_count `bigint`Number of failed attempts for archiving WAL files

last_failed_wal `text`Name of the WAL file of the most recent failed archival operation

last_failed_time `timestamp with time zone`Time of the most recent failed archival operation

stats_reset `timestamp with time zone`Time at which these statistics were last reset

Normally, WAL files are archived in order, oldest to newest, but that is not guaranteed, and does not hold under special circumstances like when promoting a standby or after crash recovery. Therefore it is not safe to assume that all files older than last_archived_wal have also been successfully archived.
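
A quick health check can still be made from the view itself; the sketch below reports the most recent success and failure and whether the latest failure is newer than the latest success (the comparison is NULL if either timestamp is unset):

SELECT archived_count, failed_count,
       last_archived_wal, last_failed_wal,
       (last_failed_time > last_archived_time) AS failure_is_newer
  FROM pg_stat_archiver;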

1.2.13. pg_stat_bgwriter

The pg_stat_bgwriter view will always have a single row, containing global data for the cluster.

pg_stat_bgwriter View

Column TypeDescription

checkpoints_timed `bigint`Number of scheduled checkpoints that have been performed

checkpoints_req `bigint`Number of requested checkpoints that have been performed

checkpoint_write_time `double precision`Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds

checkpoint_sync_time `double precision`Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds

buffers_checkpoint `bigint`Number of buffers written during checkpoints

buffers_clean `bigint`Number of buffers written by the background writer

maxwritten_clean `bigint`Number of times the background writer stopped a cleaning scan because it had written too many buffers

buffers_backend `bigint`Number of buffers written directly by a backend

buffers_backend_fsync `bigint`Number of times a backend had to execute its own `fsync` call (normally the background writer handles those even when the backend does its own write)

buffers_alloc `bigint`Number of buffers allocated

stats_reset `timestamp with time zone`Time at which these statistics were last reset
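
One common use of this view is to gauge how much buffer writing falls to backends rather than to checkpoints or the background writer. A rough sketch, using only the counters above:

SELECT buffers_checkpoint, buffers_clean, buffers_backend,
       round(100.0 * buffers_backend /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
         AS backend_write_pct
  FROM pg_stat_bgwriter;

A persistently high backend_write_pct can suggest that the background writer or checkpoint settings deserve a closer look, although interpretation depends on the workload.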

1.2.14. pg_stat_wal

The pg_stat_wal view will always have a single row, containing data about WAL activity of the cluster.

pg_stat_wal View

Column TypeDescription

wal_records `bigint`Total number of WAL records generated

wal_fpi `bigint`Total number of WAL full page images generated

wal_bytes `numeric`Total amount of WAL generated in bytes

wal_buffers_full `bigint`Number of times WAL data was written to disk because WAL buffers became full

wal_write `bigint`Number of times WAL buffers were written out to disk via `XLogWrite` request.

wal_sync `bigint`Number of times WAL files were synced to disk via `issue_xlog_fsync` request (if fsync is on and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero).

wal_write_time `double precision`Total amount of time spent writing WAL buffers to disk via `XLogWrite` request, in milliseconds (if track_wal_io_timing is enabled, otherwise zero). This includes the sync time when wal_sync_method is either open_datasync or open_sync.

wal_sync_time `double precision`Total amount of time spent syncing WAL files to disk via `issue_xlog_fsync` request, in milliseconds (if track_wal_io_timing is enabled, fsync is on, and wal_sync_method is either fdatasync, fsync or fsync_writethrough, otherwise zero).

stats_reset `timestamp with time zone`Time at which these statistics were last reset
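
Since these counters only ever grow, WAL volume over an interval is usually obtained by sampling the view periodically and diffing; a single snapshot might look like this (pg_size_pretty is used only for readability):

SELECT wal_records, wal_fpi,
       pg_size_pretty(wal_bytes) AS wal_size,
       wal_buffers_full, stats_reset
  FROM pg_stat_wal;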

1.2.15. pg_stat_database

The pg_stat_database view will contain one row for each database in the cluster, plus one for shared objects, showing database-wide statistics.

pg_stat_database View

Column TypeDescription

datid `oid`OID of this database, or 0 for objects belonging to a shared relation

datname `name`Name of this database, or `NULL` for shared objects.

numbackends `integer`Number of backends currently connected to this database, or `NULL` for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset.

xact_commit `bigint`Number of transactions in this database that have been committed

xact_rollback `bigint`Number of transactions in this database that have been rolled back

blks_read `bigint`Number of disk blocks read in this database

blks_hit `bigint`Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the IvorySQL buffer cache, not the operating system’s file system cache)

tup_returned `bigint`Number of live rows fetched by sequential scans and index entries returned by index scans in this database

tup_fetched `bigint`Number of live rows fetched by index scans in this database

tup_inserted `bigint`Number of rows inserted by queries in this database

tup_updated `bigint`Number of rows updated by queries in this database

tup_deleted `bigint`Number of rows deleted by queries in this database

conflicts `bigint`Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see `pg_stat_database_conflicts` for details.)

temp_files `bigint`Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the log_temp_files setting.

temp_bytes `bigint`Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.

deadlocks `bigint`Number of deadlocks detected in this database

checksum_failures `bigint`Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled.

checksum_last_failure `timestamp with time zone`Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled.

blk_read_time `double precision`Time spent reading data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)

blk_write_time `double precision`Time spent writing data file blocks by backends in this database, in milliseconds (if track_io_timing is enabled, otherwise zero)

session_time `double precision`Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won’t be included)

active_time `double precision`Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states `active` and `fastpath function call` in pg_stat_activity)

idle_in_transaction_time `double precision`Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states `idle in transaction` and `idle in transaction (aborted)` in pg_stat_activity)

sessions `bigint`Total number of sessions established to this database

sessions_abandoned `bigint`Number of database sessions to this database that were terminated because connection to the client was lost

sessions_fatal `bigint`Number of database sessions to this database that were terminated by fatal errors

sessions_killed `bigint`Number of database sessions to this database that were terminated by operator intervention

stats_reset `timestamp with time zone`Time at which these statistics were last reset
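
For example, a per-database buffer cache hit ratio can be derived from blks_hit and blks_read. This is a sketch only; shared objects appear with a NULL datname, and the ratio says nothing about operating-system cache hits:

SELECT datname, blks_read, blks_hit,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
  FROM pg_stat_database
 ORDER BY blks_read + blks_hit DESC;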

1.2.16. pg_stat_database_conflicts

The pg_stat_database_conflicts view will contain one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view will only contain information on standby servers, since conflicts do not occur on primary servers.

pg_stat_database_conflicts View

Column TypeDescription

datid `oid`OID of a database

datname `name`Name of this database

confl_tablespace `bigint`Number of queries in this database that have been canceled due to dropped tablespaces

confl_lock `bigint`Number of queries in this database that have been canceled due to lock timeouts

confl_snapshot `bigint`Number of queries in this database that have been canceled due to old snapshots

confl_bufferpin `bigint`Number of queries in this database that have been canceled due to pinned buffers

confl_deadlock `bigint`Number of queries in this database that have been canceled due to deadlocks

1.2.17. pg_stat_all_tables

The pg_stat_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The pg_stat_user_tables and pg_stat_sys_tables views contain the same information, but filtered to only show user and system tables respectively.

pg_stat_all_tables View

Column TypeDescription

relid `oid`OID of a table

schemaname `name`Name of the schema that this table is in

relname `name`Name of this table

seq_scan `bigint`Number of sequential scans initiated on this table

seq_tup_read `bigint`Number of live rows fetched by sequential scans

idx_scan `bigint`Number of index scans initiated on this table

idx_tup_fetch `bigint`Number of live rows fetched by index scans

n_tup_ins `bigint`Number of rows inserted

n_tup_upd `bigint`Number of rows updated (includes HOT updated rows)

n_tup_del `bigint`Number of rows deleted

n_tup_hot_upd `bigint`Number of rows HOT updated (i.e., with no separate index update required)

n_live_tup `bigint`Estimated number of live rows

n_dead_tup `bigint`Estimated number of dead rows

n_mod_since_analyze `bigint`Estimated number of rows modified since this table was last analyzed

n_ins_since_vacuum `bigint`Estimated number of rows inserted since this table was last vacuumed

last_vacuum `timestamp with time zone`Last time at which this table was manually vacuumed (not counting `VACUUM FULL`)

last_autovacuum `timestamp with time zone`Last time at which this table was vacuumed by the autovacuum daemon

last_analyze `timestamp with time zone`Last time at which this table was manually analyzed

last_autoanalyze `timestamp with time zone`Last time at which this table was analyzed by the autovacuum daemon

vacuum_count `bigint`Number of times this table has been manually vacuumed (not counting `VACUUM FULL`)

autovacuum_count `bigint`Number of times this table has been vacuumed by the autovacuum daemon

analyze_count `bigint`Number of times this table has been manually analyzed

autoanalyze_count `bigint`Number of times this table has been analyzed by the autovacuum daemon
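
These counters are often used to spot tables whose dead rows accumulate faster than autovacuum removes them. A sketch using the user-table variant of the view:

SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;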

1.2.18. pg_stat_all_indexes

The pg_stat_all_indexes view will contain one row for each index in the current database, showing statistics about accesses to that specific index. The pg_stat_user_indexes and pg_stat_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.

pg_stat_all_indexes View

Column TypeDescription

relid `oid`OID of the table for this index

indexrelid `oid`OID of this index

schemaname `name`Name of the schema this index is in

relname `name`Name of the table for this index

indexrelname `name`Name of this index

idx_scan `bigint`Number of index scans initiated on this index

idx_tup_read `bigint`Number of index entries returned by scans on this index

idx_tup_fetch `bigint`Number of live table rows fetched by simple index scans using this index

Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the pg_stat_all_indexes.idx_tup_read count(s) for the index(es) it uses, and it increments the pg_stat_all_tables.idx_tup_fetch count for the table, but it does not affect pg_stat_all_indexes.idx_tup_fetch. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale.

Note

The idx_tup_read and idx_tup_fetch counts can be different even without any use of bitmap scans, because idx_tup_read counts index entries retrieved from the index while idx_tup_fetch counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan.
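
Because idx_scan records how often each index has been used, the view is a convenient starting point for finding indexes that are never scanned (keeping in mind that the counters only cover the period since the last statistics reset). For example:

SELECT schemaname, relname, indexrelname
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0
 ORDER BY schemaname, relname;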

1.2.19. pg_statio_all_tables

The pg_statio_all_tables view will contain one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The pg_statio_user_tables and pg_statio_sys_tables views contain the same information, but filtered to only show user and system tables respectively.

pg_statio_all_tables View

Column TypeDescription

relid `oid`OID of a table

schemaname `name`Name of the schema that this table is in

relname `name`Name of this table

heap_blks_read `bigint`Number of disk blocks read from this table

heap_blks_hit `bigint`Number of buffer hits in this table

idx_blks_read `bigint`Number of disk blocks read from all indexes on this table

idx_blks_hit `bigint`Number of buffer hits in all indexes on this table

toast_blks_read `bigint`Number of disk blocks read from this table’s TOAST table (if any)

toast_blks_hit `bigint`Number of buffer hits in this table’s TOAST table (if any)

tidx_blks_read `bigint`Number of disk blocks read from this table’s TOAST table indexes (if any)

tidx_blks_hit `bigint`Number of buffer hits in this table’s TOAST table indexes (if any)
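
A per-table buffer hit ratio can be computed in the same way as the database-wide one; a sketch restricted to user tables:

SELECT schemaname, relname, heap_blks_read, heap_blks_hit,
       round(100.0 * heap_blks_hit /
             nullif(heap_blks_hit + heap_blks_read, 0), 1) AS heap_hit_pct
  FROM pg_statio_user_tables
 ORDER BY heap_blks_read DESC
 LIMIT 10;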

1.2.20. pg_statio_all_indexes

The pg_statio_all_indexes view will contain one row for each index in the current database, showing statistics about I/O on that specific index. The pg_statio_user_indexes and pg_statio_sys_indexes views contain the same information, but filtered to only show user and system indexes respectively.

pg_statio_all_indexes View

Column TypeDescription

relid `oid`OID of the table for this index

indexrelid `oid`OID of this index

schemaname `name`Name of the schema this index is in

relname `name`Name of the table for this index

indexrelname `name`Name of this index

idx_blks_read `bigint`Number of disk blocks read from this index

idx_blks_hit `bigint`Number of buffer hits in this index

1.2.21. pg_statio_all_sequences

The pg_statio_all_sequences view will contain one row for each sequence in the current database, showing statistics about I/O on that specific sequence.

pg_statio_all_sequences View

Column TypeDescription

relid `oid`OID of a sequence

schemaname `name`Name of the schema this sequence is in

relname `name`Name of this sequence

blks_read `bigint`Number of disk blocks read from this sequence

blks_hit `bigint`Number of buffer hits in this sequence

1.2.22. pg_stat_user_functions

The pg_stat_user_functions view will contain one row for each tracked function, showing statistics about executions of that function. The track_functions parameter controls exactly which functions are tracked.

pg_stat_user_functions View

Column TypeDescription

funcid `oid`OID of a function

schemaname `name`Name of the schema this function is in

funcname `name`Name of this function

calls `bigint`Number of times this function has been called

total_time `double precision`Total time spent in this function and all other functions called by it, in milliseconds

self_time `double precision`Total time spent in this function itself, not including other functions called by it, in milliseconds
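
Nothing appears in this view until function tracking is enabled. A minimal sketch that turns tracking on for the current session (track_functions is a superuser-settable parameter) and then lists the most expensive tracked functions:

SET track_functions = 'all';   -- 'pl' would track only procedural-language functions

SELECT schemaname, funcname, calls, total_time, self_time
  FROM pg_stat_user_functions
 ORDER BY total_time DESC
 LIMIT 10;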

1.2.23. pg_stat_slru

IvorySQL accesses certain on-disk information via SLRU (simple least-recently-used) caches. The pg_stat_slru view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages.

pg_stat_slru View

Column TypeDescription

name `text`Name of the SLRU

blks_zeroed `bigint`Number of blocks zeroed during initializations

blks_hit `bigint`Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system’s file system cache)

blks_read `bigint`Number of disk blocks read for this SLRU

blks_written `bigint`Number of disk blocks written for this SLRU

blks_exists `bigint`Number of blocks checked for existence for this SLRU

flushes `bigint`Number of flushes of dirty data for this SLRU

truncates `bigint`Number of truncates for this SLRU

stats_reset `timestamp with time zone`Time at which these statistics were last reset

1.2.24. Statistics Functions

Other ways of looking at the statistics can be set up by writing queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in psql you could issue \d+ pg_stat_activity.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. The functions for per-function statistics take a function OID. Note that only tables, indexes, and functions in the current database can be seen with these functions.

Additional Statistics Functions

FunctionDescription

pg_backend_pid () → `integer`Returns the process ID of the server process attached to the current session.

pg_stat_get_activity ( integer ) → `setof record`Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if `NULL` is specified. The fields returned are a subset of those in the pg_stat_activity view.

pg_stat_get_snapshot_timestamp () → `timestamp with time zone`Returns the timestamp of the current statistics snapshot, or NULL if no statistics snapshot has been taken. A snapshot is taken the first time cumulative statistics are accessed in a transaction if `stats_fetch_consistency` is set to snapshot.

pg_stat_clear_snapshot () → `void`Discards the current statistics snapshot or cached information.

pg_stat_reset () → `void`Resets all statistics counters for the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_shared ( text ) → `void`Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be `bgwriter` to reset all the counters shown in the pg_stat_bgwriter view, `archiver` to reset all the counters shown in the pg_stat_archiver view, `wal` to reset all the counters shown in the pg_stat_wal view, or `recovery_prefetch` to reset all the counters shown in the pg_stat_recovery_prefetch view. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_single_table_counters ( oid ) → `void`Resets statistics for a single table or index in the current database or shared across all databases in the cluster to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_single_function_counters ( oid ) → `void`Resets statistics for a single function in the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_slru ( text ) → `void`Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the `pg_stat_slru` view for all SLRU caches are reset. The argument can be one of CommitTs, MultiXactMember, MultiXactOffset, Notify, Serial, Subtrans, or Xact to reset the counters for only that entry. If the argument is other (or indeed, any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_replication_slot ( text ) → `void`Resets statistics of the replication slot defined by the argument. If the argument is `NULL`, resets statistics for all the replication slots. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

pg_stat_reset_subscription_stats ( oid ) → `void`Resets statistics for a single subscription shown in the `pg_stat_subscription_stats` view to zero. If the argument is NULL, resets statistics for all subscriptions. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

Warning

Using pg_stat_reset() also resets counters that autovacuum uses to determine when to trigger a vacuum or an analyze. Resetting these counters can cause autovacuum to not perform necessary work, which can cause problems such as table bloat or out-dated table statistics. A database-wide ANALYZE is recommended after the statistics have been reset.
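
A minimal sketch of that recommendation, assuming sufficient privileges (pg_stat_reset is restricted to superusers by default):

SELECT pg_stat_reset();   -- zero the cumulative counters for the current database
ANALYZE;                  -- refresh planner statistics, as recommended above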

pg_stat_get_activity, the underlying function of the pg_stat_activity view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these access functions use a backend ID number, which ranges from one to the number of currently active backends. The function pg_stat_get_backend_idset provides a convenient way to generate one row for each active backend for invoking these functions. For example, to show the PIDs and current queries of all backends:

SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_activity(s.backendid) AS query
    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
Per-Backend Statistics Functions

FunctionDescription

pg_stat_get_backend_idset () → `setof integer`Returns the set of currently active backend ID numbers (from 1 to the number of active backends).

pg_stat_get_backend_activity ( integer ) → `text`Returns the text of this backend’s most recent query.

pg_stat_get_backend_activity_start ( integer ) → `timestamp with time zone`Returns the time when the backend’s most recent query was started.

pg_stat_get_backend_client_addr ( integer ) → `inet`Returns the IP address of the client connected to this backend.

pg_stat_get_backend_client_port ( integer ) → `integer`Returns the TCP port number that the client is using for communication.

pg_stat_get_backend_dbid ( integer ) → `oid`Returns the OID of the database this backend is connected to.

pg_stat_get_backend_pid ( integer ) → `integer`Returns the process ID of this backend.

pg_stat_get_backend_start ( integer ) → `timestamp with time zone`Returns the time when this process was started.

pg_stat_get_backend_userid ( integer ) → `oid`Returns the OID of the user logged into this backend.

pg_stat_get_backend_wait_event_type ( integer ) → `text`Returns the wait event type name if this backend is currently waiting, otherwise NULL.

pg_stat_get_backend_wait_event ( integer ) → `text`Returns the wait event name if this backend is currently waiting, otherwise NULL.

pg_stat_get_backend_xact_start ( integer ) → `timestamp with time zone`Returns the time when the backend’s current transaction was started.

1.3. View Locks

  • Another useful tool for monitoring database activity is the pg_locks system table. It allows the database administrator to view information about the outstanding locks in the lock manager. For example, this capability can be used to:

    • View all the locks currently outstanding, all the locks on relations in a particular database, all the locks on a particular relation, or all the locks held by a particular IvorySQL session.

    • Determine the relation in the current database with the most ungranted locks (which might be a source of contention among database clients); a sample query for this appears after the list.

    • Determine the effect of lock contention on overall database performance, as well as the extent to which contention varies with overall database traffic.
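
For instance, a rough sketch of the second item above, counting ungranted lock requests per relation in the current database (relation::regclass resolves names only for the current database):

SELECT relation::regclass AS relname, count(*) AS ungranted_locks
  FROM pg_locks
 WHERE NOT granted
   AND relation IS NOT NULL
 GROUP BY relation
 ORDER BY ungranted_locks DESC;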

1.4. Progress Reporting

IvorySQL has the ability to report the progress of certain commands during command execution. Currently, the only commands that support progress reporting are ANALYZE, CLUSTER, CREATE INDEX, VACUUM, COPY, and BASE_BACKUP (i.e., the replication command that pg_basebackup issues to take a base backup). This may be expanded in the future.

1.4.1. ANALYZE Progress Reporting

Whenever ANALYZE is running, the pg_stat_progress_analyze view will contain a row for each backend that is currently running that command. The tables below describe the information that will be reported and provide information about how to interpret it.

pg_stat_progress_analyze View

Column TypeDescription

pid `integer`Process ID of backend.

datid `oid`OID of the database to which this backend is connected.

datname `name`Name of the database to which this backend is connected.

relid `oid`OID of the table being analyzed.

phase `text`Current processing phase. See Table 1.37.

sample_blks_total `bigint`Total number of heap blocks that will be sampled.

sample_blks_scanned `bigint`Number of heap blocks scanned.

ext_stats_total `bigint`Number of extended statistics.

ext_stats_computed `bigint`Number of extended statistics computed. This counter only advances when the phase is `computing extended statistics`.

child_tables_total `bigint`Number of child tables.

child_tables_done `bigint`Number of child tables scanned. This counter only advances when the phase is `acquiring inherited sample rows`.

current_child_table_relid `oid`OID of the child table currently being scanned. This field is only valid when the phase is `acquiring inherited sample rows`.

ANALYZE Phases

Phase

Description

initializing

The command is preparing to begin scanning the heap. This phase is expected to be very brief.

acquiring sample rows

The command is currently scanning the table given by relid to obtain sample rows.

acquiring inherited sample rows

The command is currently scanning child tables to obtain sample rows. Columns child_tables_total, child_tables_done, and current_child_table_relid contain the progress information for this phase.

computing statistics

The command is computing statistics from the sample rows obtained during the table scan.

computing extended statistics

The command is computing extended statistics from the sample rows obtained during the table scan.

finalizing analyze

The command is updating pg_class. When this phase is completed, ANALYZE will end.

Note

Note that when ANALYZE is run on a partitioned table, all of its partitions are also recursively analyzed. In that case, ANALYZE progress is reported first for the parent table, whereby its inheritance statistics are collected, followed by that for each partition.

1.4.2. CREATE INDEX Progress Reporting

Whenever CREATE INDEX or REINDEX is running, the pg_stat_progress_create_index view will contain one row for each backend that is currently creating indexes. The tables below describe the information that will be reported and provide information about how to interpret it.

pg_stat_progress_create_index View

Column TypeDescription

pid `integer`Process ID of backend.

datid `oid`OID of the database to which this backend is connected.

datname `name`Name of the database to which this backend is connected.

relid `oid`OID of the table on which the index is being created.

index_relid `oid`OID of the index being created or reindexed. During a non-concurrent `CREATE INDEX`, this is 0.

command `text`The command that is running: `CREATE INDEX`, `CREATE INDEX CONCURRENTLY`, `REINDEX`, or `REINDEX CONCURRENTLY`.

phase `text`Current processing phase of index creation. See Table 1.39.

lockers_total `bigint`Total number of lockers to wait for, when applicable.

lockers_done `bigint`Number of lockers already waited for.

current_locker_pid `bigint`Process ID of the locker currently being waited for.

blocks_total `bigint`Total number of blocks to be processed in the current phase.

blocks_done `bigint`Number of blocks already processed in the current phase.

tuples_total `bigint`Total number of tuples to be processed in the current phase.

tuples_done `bigint`Number of tuples already processed in the current phase.

partitions_total `bigint`When creating an index on a partitioned table, this column is set to the total number of partitions on which the index is to be created. This field is `0` during a REINDEX.

partitions_done `bigint`When creating an index on a partitioned table, this column is set to the number of partitions on which the index has been created. This field is `0` during a REINDEX.

CREATE INDEX Phases

Phase

Description

initializing

CREATE INDEX or REINDEX is preparing to create the index. This phase is expected to be very brief.

waiting for writers before build

CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially see the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.

building index

The index is being built by the access method-specific code. In this phase, access methods that support progress reporting fill in their own progress data, and the subphase is indicated in this column. Typically, blocks_total and blocks_done will contain progress data, as well as potentially tuples_total and tuples_done.

waiting for writers before validation

CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions with write locks that can potentially write into the table to finish. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.

index validation: scanning index

CREATE INDEX CONCURRENTLY is scanning the index searching for tuples that need to be validated. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the index) and blocks_done contain the progress information for this phase.

index validation: sorting tuples

CREATE INDEX CONCURRENTLY is sorting the output of the index scanning phase.

index validation: scanning table

CREATE INDEX CONCURRENTLY is scanning the table to validate the index tuples collected in the previous two phases. This phase is skipped when not in concurrent mode. Columns blocks_total (set to the total size of the table) and blocks_done contain the progress information for this phase.

waiting for old snapshots

CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY is waiting for transactions that can potentially see the table to release their snapshots. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.

waiting for readers before marking dead

REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before marking the old index dead. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.

waiting for readers before dropping

REINDEX CONCURRENTLY is waiting for transactions with read locks on the table to finish, before dropping the old index. This phase is skipped when not in concurrent mode. Columns lockers_total, lockers_done and current_locker_pid contain the progress information for this phase.
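
As with the other progress views, a completion estimate can be derived from the per-phase counters; a rough sketch using blocks_done and blocks_total (which are only meaningful in phases that report them):

SELECT pid, command, phase,
       round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS pct_blocks_done
  FROM pg_stat_progress_create_index;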

1.4.3. VACUUM Progress Reporting

Whenever VACUUM is running, the pg_stat_progress_vacuum view will contain one row for each backend (including autovacuum worker processes) that is currently vacuuming. The tables below describe the information that will be reported and provide information about how to interpret it. Progress for VACUUM FULL commands is reported via pg_stat_progress_cluster because both VACUUM FULL and CLUSTER rewrite the table, while regular VACUUM only modifies it in place.

pg_stat_progress_vacuum View

Column TypeDescription

pid `integer`Process ID of backend.

datid `oid`OID of the database to which this backend is connected.

datname `name`Name of the database to which this backend is connected.

relid `oid`OID of the table being vacuumed.

phase `text`Current processing phase of vacuum.

heap_blks_total `bigint`Total number of heap blocks in the table. This number is reported as of the beginning of the scan; blocks added later will not be (and need not be) visited by this `VACUUM`.

heap_blks_scanned `bigint`Number of heap blocks scanned. Because the visibility map is used to optimize scans, some blocks will be skipped without inspection; skipped blocks are included in this total, so that this number will eventually become equal to `heap_blks_total` when the vacuum is complete. This counter only advances when the phase is scanning heap.

heap_blks_vacuumed `bigint`Number of heap blocks vacuumed. Unless the table has no indexes, this counter only advances when the phase is `vacuuming heap`. Blocks that contain no dead tuples are skipped, so the counter may sometimes skip forward in large increments.

index_vacuum_count `bigint`Number of completed index vacuum cycles.

max_dead_tuples `bigint`Number of dead tuples that we can store before needing to perform an index vacuum cycle, based on maintenance_work_mem.

num_dead_tuples `bigint`Number of dead tuples collected since the last index vacuum cycle.

VACUUM Phases

Phase

Description

initializing

VACUUM is preparing to begin scanning the heap. This phase is expected to be very brief.

scanning heap

VACUUM is currently scanning the heap. It will prune and defragment each page if required, and possibly perform freezing activity. The heap_blks_scanned column can be used to monitor the progress of the scan.

vacuuming indexes

VACUUM is currently vacuuming the indexes. If a table has any indexes, this will happen at least once per vacuum, after the heap has been completely scanned. It may happen multiple times per vacuum if maintenance_work_mem (or, in the case of autovacuum, autovacuum_work_mem if set) is insufficient to store the number of dead tuples found.

vacuuming heap

VACUUM is currently vacuuming the heap. Vacuuming the heap is distinct from scanning the heap, and occurs after each instance of vacuuming indexes. If heap_blks_scanned is less than heap_blks_total, the system will return to scanning the heap after this phase is completed; otherwise, it will begin cleaning up indexes after this phase is completed.

cleaning up indexes

VACUUM is currently cleaning up indexes. This occurs after the heap has been completely scanned and all vacuuming of the indexes and the heap has been completed.

truncating heap

VACUUM is currently truncating the heap so as to return empty pages at the end of the relation to the operating system. This occurs after cleaning up indexes.

performing final cleanup

VACUUM is performing final cleanup. During this phase, VACUUM will vacuum the free space map, update statistics in pg_class, and report statistics to the cumulative statistics system. When this phase is completed, VACUUM will end.
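
A running vacuum can be watched by polling the view; the sketch below reports scan progress as a percentage of heap blocks (relid::regclass resolves the table name only when run in the same database as the VACUUM):

SELECT pid, datname, relid::regclass AS relname, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1) AS pct_scanned,
       index_vacuum_count
  FROM pg_stat_progress_vacuum;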

1.4.4. CLUSTER Progress Reporting

Whenever CLUSTER or VACUUM FULL is running, the pg_stat_progress_cluster view will contain a row for each backend that is currently running either command. The tables below describe the information that will be reported and provide information about how to interpret it.

pg_stat_progress_cluster View

Column TypeDescription

pid `integer`Process ID of backend.

datid `oid`OID of the database to which this backend is connected.

datname `name`Name of the database to which this backend is connected.

relid `oid`OID of the table being clustered.

command `text`The command that is running. Either `CLUSTER` or `VACUUM FULL`.

phase `text`Current processing phase. See Table 1.43.

cluster_index_relid `oid`If the table is being scanned using an index, this is the OID of the index being used; otherwise, it is zero.

heap_tuples_scanned `bigint`Number of heap tuples scanned. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`.

heap_tuples_written `bigint`Number of heap tuples written. This counter only advances when the phase is `seq scanning heap`, `index scanning heap` or `writing new heap`.

heap_blks_total `bigint`Total number of heap blocks in the table. This number is reported as of the beginning of `seq scanning heap`.

heap_blks_scanned `bigint`Number of heap blocks scanned. This counter only advances when the phase is `seq scanning heap`.

index_rebuild_count `bigint`Number of indexes rebuilt. This counter only advances when the phase is `rebuilding index`.

CLUSTER and VACUUM FULL Phases

Phase

Description

initializing

The command is preparing to begin scanning the heap. This phase is expected to be very brief.

seq scanning heap

The command is currently scanning the table using a sequential scan.

index scanning heap

CLUSTER is currently scanning the table using an index scan.

sorting tuples

CLUSTER is currently sorting tuples.

writing new heap

CLUSTER is currently writing the new heap.

swapping relation files

The command is currently swapping newly-built files into place.

rebuilding index

The command is currently rebuilding an index.

performing final cleanup

The command is performing final cleanup. When this phase is completed, CLUSTER or VACUUM FULL will end.

1.4.5. Base Backup Progress Reporting

Whenever an application like pg_basebackup is taking a base backup, the pg_stat_progress_basebackup view will contain a row for each WAL sender process that is currently running the BASE_BACKUP replication command and streaming the backup. The tables below describe the information that will be reported and provide information about how to interpret it.

pg_stat_progress_basebackup View

Column TypeDescription

pid `integer`Process ID of a WAL sender process.

phase `text`Current processing phase.

backup_total `bigint`Total amount of data that will be streamed. This is estimated and reported as of the beginning of the `streaming database files` phase. Note that this is only an approximation, since the database may change during the `streaming database files` phase and WAL may be included in the backup later. This is always the same value as backup_streamed once the amount of data streamed exceeds the estimated total size. If the estimation is disabled in pg_basebackup (i.e., the --no-estimate-size option is specified), this is NULL.

backup_streamed `bigint`Amount of data streamed. This counter only advances when the phase is `streaming database files` or `transferring wal files`.

tablespaces_total `bigint`Total number of tablespaces that will be streamed.

tablespaces_streamed `bigint`Number of tablespaces streamed. This counter only advances when the phase is `streaming database files`.

Base Backup Phases

Phase

Description

initializing

The WAL sender process is preparing to begin the backup. This phase is expected to be very brief.

waiting for checkpoint to finish

The WAL sender process is currently performing pg_backup_start to prepare to take a base backup, and waiting for the start-of-backup checkpoint to finish.

estimating backup size

The WAL sender process is currently estimating the total amount of database files that will be streamed as a base backup.

streaming database files

The WAL sender process is currently streaming database files as a base backup.

waiting for wal archiving to finish

The WAL sender process is currently performing pg_backup_stop to finish the backup, and waiting for all the WAL files required for the base backup to be successfully archived. If either --wal-method=none or --wal-method=stream is specified in pg_basebackup, the backup will end when this phase is completed.

transferring wal files

The WAL sender process is currently transferring all WAL logs generated during the backup. This phase occurs after waiting for wal archiving to finish phase if --wal-method=fetch is specified in pg_basebackup. The backup will end when this phase is completed.
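
Progress of a base backup can likewise be estimated from backup_streamed and backup_total, remembering that backup_total is NULL when size estimation is disabled; a sketch:

SELECT pid, phase, tablespaces_streamed, tablespaces_total,
       round(100.0 * backup_streamed / nullif(backup_total, 0), 1) AS pct_streamed
  FROM pg_stat_progress_basebackup;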

1.4.6. COPY Progress Reporting

Whenever COPY is running, the pg_stat_progress_copy view will contain one row for each backend that is currently running a COPY command. The table below describes the information that will be reported and provides information about how to interpret it.

pg_stat_progress_copy View

Column TypeDescription

pid `integer`Process ID of backend.

datid `oid`OID of the database to which this backend is connected.

datname `name`Name of the database to which this backend is connected.

relid `oid`OID of the table on which the `COPY` command is executed. It is set to 0 if copying from a SELECT query.

command `text`The command that is running: `COPY FROM` or `COPY TO`.

type `text`The I/O type that the data is read from or written to: `FILE`, `PROGRAM`, `PIPE` (for COPY FROM STDIN and COPY TO STDOUT), or `CALLBACK` (used, for example, during the initial table synchronization in logical replication).

bytes_processed `bigint`Number of bytes already processed by the `COPY` command.

bytes_total `bigint`Size of the source file for a `COPY FROM` command, in bytes. It is set to 0 if not available.

tuples_processed `bigint`Number of tuples already processed by the `COPY` command.

tuples_excluded `bigint`Number of tuples not processed because they were excluded by the `WHERE` clause of the COPY command.
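
A similar polling query works for COPY; bytes_total is only available when copying from a regular file, so the percentage below may be NULL:

SELECT pid, relid::regclass AS relname, command, type,
       tuples_processed, tuples_excluded,
       round(100.0 * bytes_processed / nullif(bytes_total, 0), 1) AS pct_bytes
  FROM pg_stat_progress_copy;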

1.5. Dynamic Tracing

IvorySQL provides facilities to support dynamic tracing of the database server. This allows an external utility to be called at specific points in the code and thereby trace execution.

A number of probes or trace points are already inserted into the source code. These probes are intended to be used by database developers and administrators. By default the probes are not compiled into IvorySQL; the user needs to explicitly tell the configure script to make the probes available.

Currently, the DTrace utility is supported, which, at the time of this writing, is available on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The SystemTap project for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic tracing utilities is theoretically possible by changing the definitions for the macros in src/include/utils/probes.h.

1.5.1. Compiling for Dynamic Tracing

By default, probes are not available, so you will need to explicitly tell the configure script to make the probes available in IvorySQL. To include DTrace support specify --enable-dtrace to configure.

1.5.2. Built-in Probes

A number of standard probes are provided in the source code; more probes can certainly be added to enhance IvorySQL’s observability.

Built-in DTrace Probes

Name

Parameters

Description

transaction-start

(LocalTransactionId)

Probe that fires at the start of a new transaction. arg0 is the transaction ID.

transaction-commit

(LocalTransactionId)

Probe that fires when a transaction completes successfully. arg0 is the transaction ID.

transaction-abort

(LocalTransactionId)

Probe that fires when a transaction completes unsuccessfully. arg0 is the transaction ID.

query-start

(const char *)

Probe that fires when the processing of a query is started. arg0 is the query string.

query-done

(const char *)

Probe that fires when the processing of a query is complete. arg0 is the query string.

query-parse-start

(const char *)

Probe that fires when the parsing of a query is started. arg0 is the query string.

query-parse-done

(const char *)

Probe that fires when the parsing of a query is complete. arg0 is the query string.

query-rewrite-start

(const char *)

Probe that fires when the rewriting of a query is started. arg0 is the query string.

query-rewrite-done

(const char *)

Probe that fires when the rewriting of a query is complete. arg0 is the query string.

query-plan-start

()

Probe that fires when the planning of a query is started.

query-plan-done

()

Probe that fires when the planning of a query is complete.

query-execute-start

()

Probe that fires when the execution of a query is started.

query-execute-done

()

Probe that fires when the execution of a query is complete.

statement-status

(const char *)

Probe that fires anytime the server process updates its pg_stat_activity.status. arg0 is the new status string.

checkpoint-start

(int)

Probe that fires when a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.

checkpoint-done

(int, int, int, int, int)

Probe that fires when a checkpoint is complete. (The probes listed next fire in sequence during checkpoint processing.) arg0 is the number of buffers written. arg1 is the total number of buffers. arg2, arg3 and arg4 contain the number of WAL files added, removed and recycled respectively.

clog-checkpoint-start

(bool)

Probe that fires when the CLOG portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.

clog-checkpoint-done

(bool)

Probe that fires when the CLOG portion of a checkpoint is complete. arg0 has the same meaning as for clog-checkpoint-start.

subtrans-checkpoint-start

(bool)

Probe that fires when the SUBTRANS portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.

subtrans-checkpoint-done

(bool)

Probe that fires when the SUBTRANS portion of a checkpoint is complete. arg0 has the same meaning as for subtrans-checkpoint-start.

multixact-checkpoint-start

(bool)

Probe that fires when the MultiXact portion of a checkpoint is started. arg0 is true for normal checkpoint, false for shutdown checkpoint.

multixact-checkpoint-done

(bool)

Probe that fires when the MultiXact portion of a checkpoint is complete. arg0 has the same meaning as for multixact-checkpoint-start.

buffer-checkpoint-start

(int)

Probe that fires when the buffer-writing portion of a checkpoint is started. arg0 holds the bitwise flags used to distinguish different checkpoint types, such as shutdown, immediate or force.

buffer-sync-start

(int, int)

Probe that fires when we begin to write dirty buffers during checkpoint (after identifying which buffers must be written). arg0 is the total number of buffers. arg1 is the number that are currently dirty and need to be written.

buffer-sync-written

(int)

Probe that fires after each buffer is written during checkpoint. arg0 is the ID number of the buffer.

buffer-sync-done

(int, int, int)

Probe that fires when all dirty buffers have been written. arg0 is the total number of buffers. arg1 is the number of buffers actually written by the checkpoint process. arg2 is the number that were expected to be written (arg1 of buffer-sync-start); any difference reflects other processes flushing buffers during the checkpoint.

buffer-checkpoint-sync-start

()

Probe that fires after dirty buffers have been written to the kernel, and before starting to issue fsync requests.

buffer-checkpoint-done

()

Probe that fires when syncing of buffers to disk is complete.

twophase-checkpoint-start

()

Probe that fires when the two-phase portion of a checkpoint is started.

twophase-checkpoint-done

()

Probe that fires when the two-phase portion of a checkpoint is complete.

buffer-read-start

(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)

Probe that fires when a buffer read is started. arg0 and arg1 contain the fork and block numbers of the page (but arg1 will be -1 if this is a relation extension request). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read.

buffer-read-done

(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool)

Probe that fires when a buffer read is complete. arg0 and arg1 contain the fork and block numbers of the page (if this is a relation extension request, arg1 now contains the block number of the newly added block). arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is true for a relation extension request, false for normal read. arg7 is true if the buffer was found in the pool, false if not.

buffer-flush-start

(ForkNumber, BlockNumber, Oid, Oid, Oid)

Probe that fires before issuing any write request for a shared buffer. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.

buffer-flush-done

(ForkNumber, BlockNumber, Oid, Oid, Oid)

Probe that fires when a write request is complete. (Note that this just reflects the time to pass the data to the kernel; it’s typically not actually been written to disk yet.) The arguments are the same as for buffer-flush-start.

buffer-write-dirty-start

(ForkNumber, BlockNumber, Oid, Oid, Oid)

Probe that fires when a server process begins to write a dirty buffer. (If this happens often, it implies that shared_buffers is too small or the background writer control parameters need adjustment.) arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation.

buffer-write-dirty-done

(ForkNumber, BlockNumber, Oid, Oid, Oid)

Probe that fires when a dirty-buffer write is complete. The arguments are the same as for buffer-write-dirty-start.

wal-buffer-write-dirty-start

()

Probe that fires when a server process begins to write a dirty WAL buffer because no more WAL buffer space is available. (If this happens often, it implies that wal_buffers is too small.)

wal-buffer-write-dirty-done

()

Probe that fires when a dirty WAL buffer write is complete.

wal-insert

(unsigned char, unsigned char)

Probe that fires when a WAL record is inserted. arg0 is the resource manager (rmid) for the record. arg1 contains the info flags.

wal-switch

()

Probe that fires when a WAL segment switch is requested.

smgr-md-read-start

(ForkNumber, BlockNumber, Oid, Oid, Oid, int)

Probe that fires when beginning to read a block from a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.

smgr-md-read-done

(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)

Probe that fires when a block read is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually read, while arg7 is the number requested (if these are different it indicates trouble).

smgr-md-write-start

(ForkNumber, BlockNumber, Oid, Oid, Oid, int)

Probe that fires when beginning to write a block to a relation. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer.

smgr-md-write-done

(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)

Probe that fires when a block write is complete. arg0 and arg1 contain the fork and block numbers of the page. arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs identifying the relation. arg5 is the ID of the backend which created the temporary relation for a local buffer, or InvalidBackendId (-1) for a shared buffer. arg6 is the number of bytes actually written, while arg7 is the number requested (if these are different it indicates trouble).

sort-start

(int, bool, int, int, bool, int)

Probe that fires when a sort operation is started. arg0 indicates heap, index or datum sort. arg1 is true for unique-value enforcement. arg2 is the number of key columns. arg3 is the number of kilobytes of work memory allowed. arg4 is true if random access to the sort result is required. arg5 indicates serial when 0, parallel worker when 1, or parallel leader when 2.

sort-done

(bool, long)

Probe that fires when a sort is complete. arg0 is true for external sort, false for internal sort. arg1 is the number of disk blocks used for an external sort, or kilobytes of memory used for an internal sort.

lwlock-acquire

(char *, LWLockMode)

Probe that fires when an LWLock has been acquired. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.

lwlock-release

(char *)

Probe that fires when an LWLock has been released (but note that any released waiters have not yet been awakened). arg0 is the LWLock’s tranche.

lwlock-wait-start

(char *, LWLockMode)

Probe that fires when an LWLock was not immediately available and a server process has begun to wait for the lock to become available. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.

lwlock-wait-done

(char *, LWLockMode)

Probe that fires when a server process has been released from its wait for an LWLock (it does not actually have the lock yet). arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.

lwlock-condacquire

(char *, LWLockMode)

Probe that fires when an LWLock was successfully acquired when the caller specified no waiting. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.

lwlock-condacquire-fail

(char *, LWLockMode)

Probe that fires when an LWLock was not successfully acquired when the caller specified no waiting. arg0 is the LWLock’s tranche. arg1 is the requested lock mode, either exclusive or shared.

lock-wait-start

(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)

Probe that fires when a request for a heavyweight lock (lmgr lock) has begun to wait because the lock is not available. arg0 through arg3 are the tag fields identifying the object being locked. arg4 indicates the type of object being locked. arg5 indicates the lock type being requested.

lock-wait-done

(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)

Probe that fires when a request for a heavyweight lock (lmgr lock) has finished waiting (i.e., has acquired the lock). The arguments are the same as for lock-wait-start.

deadlock-found

()

Probe that fires when a deadlock is found by the deadlock detector.

Defined Types Used in Probe Parameters

Type

Definition

LocalTransactionId

unsigned int

LWLockMode

int

LOCKMODE

int

BlockNumber

unsigned int

Oid

unsigned int

ForkNumber

int

bool

unsigned char

1.5.3. Using Probes

The example below shows a DTrace script for analyzing transaction counts in the system, as an alternative to snapshotting pg_stat_database before and after a performance test:

#!/usr/sbin/dtrace -qs

postgresql$1:::transaction-start
{
      @start["Start"] = count();
      self->ts  = timestamp;
}

postgresql$1:::transaction-abort
{
      @abort["Abort"] = count();
}

postgresql$1:::transaction-commit
/self->ts/
{
      @commit["Commit"] = count();
      @time["Total time (ns)"] = sum(timestamp - self->ts);
      self->ts=0;
}

When executed, the example D script gives output such as:

# ./txn_count.d `pgrep -n postgres` or ./txn_count.d <PID>
^C

Start                                          71
Commit                                         70
Total time (ns)                        2312105013
Note

SystemTap uses a different notation for trace scripts than DTrace does, even though the underlying trace points are compatible. One point worth noting is that at this writing, SystemTap scripts must reference probe names using double underscores in place of hyphens. This is expected to be fixed in future SystemTap releases.

1.5.4. Defining New Probes

New probes can be defined within the code wherever the developer desires, though this will require a recompilation. Below are the steps for inserting new probes:

  1. Decide on probe names and data to be made available through the probes

  2. Add the probe definitions to src/backend/utils/probes.d

  3. Include pg_trace.h if it is not already present in the module(s) containing the probe points, and insert TRACE_POSTGRESQL probe macros at the desired locations in the source code

  4. Recompile and verify that the new probes are available

Example: Here is how you would add a probe to trace all new transactions by transaction ID.

  1. Decide that the probe will be named transaction-start and requires a parameter of type LocalTransactionId

  2. Add the probe definition to src/backend/utils/probes.d:

    ```
    probe transaction__start(LocalTransactionId);
    ```
    Note the use of the double underscore in the probe name. In a DTrace script using the probe, the double underscore needs to be replaced with a hyphen, so `transaction-start` is the name to document for users.
  3. At compile time, transaction__start is converted to a macro called TRACE_POSTGRESQL_TRANSACTION_START (notice the underscores are single here), which is available by including pg_trace.h. Add the macro call to the appropriate location in the source code. In this case, it looks like the following:

    ```
    TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
    ```
  4. After recompiling and running the new binary, check that your newly added probe is available by executing the following DTrace command. You should see output similar to this:

    ```
    # dtrace -ln transaction-start
       ID    PROVIDER          MODULE           FUNCTION NAME
    18705 postgresql49878     postgres     StartTransactionCommand transaction-start
    18755 postgresql49877     postgres     StartTransactionCommand transaction-start
    18805 postgresql49876     postgres     StartTransactionCommand transaction-start
    18855 postgresql49875     postgres     StartTransactionCommand transaction-start
    18986 postgresql49873     postgres     StartTransactionCommand transaction-start
    ```

There are a few things to be careful about when adding trace macros to the C code:

  • You should take care that the data types specified for a probe’s parameters match the data types of the variables used in the macro. Otherwise, you will get compilation errors.

  • On most platforms, if IvorySQL is built with --enable-dtrace, the arguments to a trace macro will be evaluated whenever control passes through the macro, even if no tracing is being done. This is usually not worth worrying about if you are just reporting the values of a few local variables. But beware of putting expensive function calls into the arguments. If you need to do that, consider protecting the macro with a check to see if the trace is actually enabled:

    ```
    if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
        TRACE_POSTGRESQL_TRANSACTION_START(some_function(...));
    ```

Each trace macro has a corresponding ENABLED macro.

2. Monitoring Disk Usage

2.1. Determining Disk Usage

Each table has a primary heap disk file where most of the data is stored. If the table has any columns with potentially wide values, there also might be a TOAST file associated with the table, which is used to store values too wide to fit comfortably in the main table. There will be one valid index on the TOAST table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file — possibly more than one file, if the file would exceed one gigabyte.

You can monitor disk space in three ways: using the SQL functions, using the oid2name module, or using manual inspection of the system catalogs. The SQL functions are the easiest to use and are generally recommended. The remainder of this section shows how to do it by inspection of the system catalogs.
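As a brief illustration of the first approach, the object size functions report sizes directly, without depending on relpages being up to date. A minimal example for the customer table used below (pg_size_pretty simply formats the byte count):

SELECT pg_size_pretty(pg_relation_size('customer'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('customer')) AS total_size;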

Using psql on a recently vacuumed or analyzed database, you can issue queries to see the disk usage of any table:

SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'customer';

 pg_relation_filepath | relpages
----------------------+----------
 base/16384/16806     |       60
(1 row)

Each page is typically 8 kilobytes. (Remember, relpages is only updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE INDEX.) The file path name is of interest if you want to examine the table’s disk file directly.
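Since relpages counts pages, an approximate size in bytes can be computed straight from the catalog. The sketch below reads the installation's actual page size from current_setting('block_size') rather than assuming 8 kB:

SELECT relname,
       relpages,
       relpages * current_setting('block_size')::int AS approx_bytes
FROM pg_class
WHERE relname = 'customer';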

To show the space used by TOAST tables, use a query like the following:

SELECT relname, relpages
FROM pg_class,
     (SELECT reltoastrelid
      FROM pg_class
      WHERE relname = 'customer') AS ss
WHERE oid = ss.reltoastrelid OR
      oid = (SELECT indexrelid
             FROM pg_index
             WHERE indrelid = ss.reltoastrelid)
ORDER BY relname;

       relname        | relpages
----------------------+----------
 pg_toast_16806       |        0
 pg_toast_16806_index |        1

You can easily display index sizes, too:

SELECT c2.relname, c2.relpages
FROM pg_class c, pg_class c2, pg_index i
WHERE c.relname = 'customer' AND
      c.oid = i.indrelid AND
      c2.oid = i.indexrelid
ORDER BY c2.relname;

      relname      | relpages
-------------------+----------
 customer_id_index |       26

It is easy to find your largest tables and indexes using this information:

SELECT relname, relpages
FROM pg_class
ORDER BY relpages DESC;

       relname        | relpages
----------------------+----------
 bigtable             |     3290
 customer             |     3144
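Because relpages covers only each relation's main fork and is only as current as the last VACUUM or ANALYZE, an up-to-date ranking that also includes index and TOAST space can be obtained with pg_total_relation_size. A sketch:

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;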

2.2. Disk Full Failure

The most important disk monitoring task of a database administrator is to make sure the disk doesn’t become full. A filled data disk will not result in data corruption, but it might prevent useful activity from occurring. If the disk holding the WAL files fills up, the database server might panic and consequently shut down.

If you cannot free up additional space on the disk by deleting other things, you can move some of the database files to other file systems by making use of tablespaces.
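For example, a tablespace on another file system can be created and a large table moved into it (the tablespace name and path here are placeholders; the directory must already exist, be empty, and be owned by the server user):

-- hypothetical tablespace name and location; adjust to your system
CREATE TABLESPACE spare_disk LOCATION '/mnt/spare/pgdata';
-- move one large table (bigtable, from the example above) onto it
ALTER TABLE bigtable SET TABLESPACE spare_disk;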

Tip

Some file systems perform badly when they are almost full, so do not wait until the disk is completely full to take action.

If your system supports per-user disk quotas, then the database will naturally be subject to whatever quota is placed on the user the server runs as. Exceeding the quota will have the same bad effects as running out of disk space entirely.