Tool Reference
1. List of tools
This part contains reference information for IvorySQL client applications and utilities. Not all of these commands are of general utility; some might require special privileges. The common feature of these applications is that they can be run on any host, independent of where the database server resides.
When specified on the command line, user and database names have their case preserved — the presence of spaces or special characters might require quoting. Table names and other identifiers do not have their case preserved, except where documented, and might require quoting.
| Category | Tool name | Description |
| --- | --- | --- |
| Client Applications | clusterdb | clusterdb is a utility for reclustering tables in an IvorySQL database. It finds tables that have previously been clustered, and clusters them again on the same index that was last used. Tables that have never been clustered are not affected. clusterdb is a wrapper around the SQL command `CLUSTER`. There is no effective difference between clustering databases via this utility and via other methods for accessing the server. |
| | createdb | createdb creates a new IvorySQL database. Normally, the database user who executes this command becomes the owner of the new database. However, a different owner can be specified via the `-O` option, if the executing user has appropriate privileges. createdb is a wrapper around the SQL command `CREATE DATABASE`. There is no effective difference between creating databases via this utility and via other methods for accessing the server. |
| | createuser | createuser creates a new IvorySQL user (or more precisely, a role). Only superusers and users with `CREATEROLE` privilege can create new users. createuser is a wrapper around the SQL command `CREATE ROLE`. |
| | dropdb | dropdb destroys an existing IvorySQL database. The user who executes this command must be a database superuser or the owner of the database. dropdb is a wrapper around the SQL command `DROP DATABASE`. |
| | dropuser | dropuser removes an existing IvorySQL user. Only superusers and users with the `CREATEROLE` privilege can remove users. dropuser is a wrapper around the SQL command `DROP ROLE`. |
| | ecpg | ecpg is the embedded SQL preprocessor for C programs. It converts C programs with embedded SQL statements to normal C code by replacing the SQL invocations with special function calls. |
| | pg_amcheck | pg_amcheck supports running amcheck's corruption checking functions against one or more databases, with options to select which schemas, tables and indexes to check, which kinds of checking to perform, and whether to perform the checks in parallel, and if so, the number of parallel connections to establish and use. |
| | pg_basebackup | pg_basebackup is used to take a base backup of a running IvorySQL database cluster. The backup is taken without affecting other clients of the database, and can be used both for point-in-time recovery and as the starting point for a log-shipping or streaming-replication standby server. pg_basebackup makes an exact copy of the database cluster’s files, while making sure the server is put into and out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects. For selective backups, another tool such as pg_dump must be used. The backup is made over a regular IvorySQL connection that uses the replication protocol. The connection must be made with a user ID that has `REPLICATION` permissions. |
| | pgbench | pgbench is a simple program for running benchmark tests on IvorySQL. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is loosely based on TPC-B, involving five `SELECT`, `UPDATE`, and `INSERT` commands per transaction. |
| | pg_config | The pg_config utility prints configuration parameters of the currently installed version of IvorySQL. It is intended, for example, to be used by software packages that want to interface to IvorySQL to facilitate finding the required header files and libraries. |
| | pg_dump | pg_dump is a utility for backing up an IvorySQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers). pg_dump only dumps a single database. To back up an entire cluster, or to back up global objects that are common to all databases in a cluster (such as roles and tablespaces), use pg_dumpall. Dumps can be output in script or archive file formats. Script dumps are plain-text files containing the SQL commands required to reconstruct the database to the state it was in at the time it was saved. To restore from such a script, feed it to psql. Script files can be used to reconstruct the database even on other machines and other architectures; with some modifications, even on other SQL database products. The alternative archive file formats must be used with pg_restore to rebuild the database. They allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive file formats are designed to be portable across architectures. When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to back up an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (`-Fc`) and the “directory” format (`-Fd`). |
| | pg_dumpall | pg_dumpall is a utility for writing out (“dumping”) all IvorySQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in the cluster. pg_dumpall also dumps global objects that are common to all databases, namely database roles, tablespaces, and privilege grants for configuration parameters. (pg_dump does not save these objects.) Since pg_dumpall reads tables from all databases you will most likely have to connect as a database superuser in order to produce a complete dump. Also you will need superuser privileges to execute the saved script in order to be allowed to add roles and create databases. The SQL script will be written to the standard output. Use the `-f`/`--file` option or shell operators to redirect it into a file. |
| | pg_isready | pg_isready is a utility for checking the connection status of an IvorySQL database server. The exit status specifies the result of the connection check. |
| | pg_receivewal | pg_receivewal is used to stream the write-ahead log from a running IvorySQL cluster. The write-ahead log is streamed using the streaming replication protocol, and is written to a local directory of files. This directory can be used as the archive location for doing a restore using point-in-time recovery. pg_receivewal streams the write-ahead log in real time as it’s being generated on the server, and does not wait for segments to complete like archive_command and archive_library do. For this reason, it is not necessary to set archive_timeout when using pg_receivewal. Unlike the WAL receiver of an IvorySQL standby server, pg_receivewal by default flushes WAL data only when a WAL file is closed. The option `--synchronous` must be specified to flush WAL data in real time. |
| | pg_recvlogical | pg_recvlogical controls logical decoding replication slots and streams data from such replication slots. |
| | pg_restore | pg_restore is a utility for restoring an IvorySQL database from an archive created by [pg_dump](https://www.IvorySQL.org/docs/current/app-pgdump.html) in one of the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The archive files are designed to be portable across architectures. pg_restore can operate in two modes. If a database name is specified, pg_restore connects to that database and restores archive contents directly into the database. Otherwise, a script containing the SQL commands necessary to rebuild the database is created and written to a file or standard output. This script output is equivalent to the plain text output format of pg_dump. Some of the options controlling the output are therefore analogous to pg_dump options. Obviously, pg_restore cannot restore information that is not present in the archive file. For instance, if the archive was made using the “dump data as `INSERT` commands” option, pg_restore will not be able to load the data using `COPY` statements. |
| | pg_verifybackup | pg_verifybackup is used to check the integrity of a database cluster backup taken using pg_basebackup against a backup manifest generated by the server at the time of the backup. |
| | psql | psql is a terminal-based front-end to IvorySQL. It enables you to type in queries interactively, issue them to IvorySQL, and see the query results. Alternatively, input can be from a file or from command line arguments. In addition, psql provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks. |
| | reindexdb | reindexdb is a utility for rebuilding indexes in an IvorySQL database. reindexdb is a wrapper around the SQL command `REINDEX`. There is no effective difference between reindexing databases via this utility and via other methods for accessing the server. |
| | vacuumdb | vacuumdb is a utility for cleaning an IvorySQL database. vacuumdb will also generate internal statistics used by the IvorySQL query optimizer. vacuumdb is a wrapper around the SQL command `VACUUM`. |
| Server Applications | initdb | initdb creates a new IvorySQL database cluster. |
| | pg_archivecleanup | pg_archivecleanup is designed to be used as an `archive_cleanup_command` to clean up WAL file archives when running as a standby server. |
| | pg_checksums | pg_checksums checks, enables or disables data checksums in an IvorySQL cluster. The server must be shut down cleanly before running pg_checksums. When verifying checksums, the exit status is zero if there are no checksum errors, and nonzero if at least one checksum failure is detected. When enabling or disabling checksums, the exit status is nonzero if the operation failed. When verifying checksums, every file in the cluster is scanned. When enabling checksums, each relation file block with a changed checksum is rewritten in place. Disabling checksums only updates the file `pg_control`. |
| | pg_controldata | pg_controldata prints information initialized during initdb, such as the catalog version, as well as information about write-ahead logging and checkpoint processing. |
| | pg_ctl | pg_ctl is a utility for initializing an IvorySQL database cluster, starting, stopping, or restarting the IvorySQL database server (postgres), or displaying the status of a running server. Although the server can be started manually, pg_ctl encapsulates tasks such as redirecting log output and properly detaching from the terminal and process group. It also provides convenient options for controlled shutdown. |
| | pg_resetwal | pg_resetwal clears the write-ahead log (WAL) and optionally resets some other control information stored in the `pg_control` file. |
| | pg_rewind | pg_rewind is a tool for synchronizing an IvorySQL cluster with another copy of the same cluster, after the clusters' timelines have diverged. A typical scenario is to bring an old primary server back online after failover as a standby that follows the new primary. After a successful rewind, the state of the target data directory is analogous to a base backup of the source data directory. Unlike taking a new base backup or using a tool like rsync, pg_rewind does not require comparing or copying unchanged relation blocks in the cluster. Only changed blocks from existing relation files are copied; all other files, including new relation files, configuration files, and WAL segments, are copied in full. As such the rewind operation is significantly faster than other approaches when the database is large and only a small fraction of blocks differ between the clusters. pg_rewind examines the timeline histories of the source and target clusters to determine the point where they diverged, and expects to find WAL in the target cluster’s `pg_wal` directory reaching all the way back to the point of divergence. |
| | pg_test_fsync | pg_test_fsync is intended to give you a reasonable idea of what the fastest wal_sync_method is on your specific system, as well as supplying diagnostic information in the event of an identified I/O problem. However, differences shown by pg_test_fsync might not make any significant difference in real database throughput, especially since many database servers are not speed-limited by their write-ahead logs. pg_test_fsync reports average file sync operation time in microseconds for each `wal_sync_method`. |
| | pg_test_timing | pg_test_timing is a tool to measure the timing overhead on your system and confirm that the system time never moves backwards. Systems that are slow to collect timing data can give less accurate `EXPLAIN ANALYZE` results. |
| | pg_upgrade | Major IvorySQL releases regularly add new features that often change the layout of the system tables, but the internal data storage format rarely changes. pg_upgrade uses this fact to perform rapid upgrades by creating new system tables and simply reusing the old user data files. If a future major release ever changes the data storage format in a way that makes the old data format unreadable, pg_upgrade will not be usable for such upgrades. (The community will attempt to avoid such situations.) pg_upgrade does its best to make sure the old and new clusters are binary-compatible, e.g., by checking for compatible compile-time settings, including 32/64-bit binaries. It is important that any external modules are also binary compatible, though this cannot be checked by pg_upgrade. |
| | pg_waldump | pg_waldump displays the write-ahead log (WAL) in human-readable form. It is mainly useful for debugging or educational purposes. |
| | postgres | postgres is the IvorySQL database server. |
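To illustrate how several of these tools fit together, here is a hypothetical dump-and-restore session using pg_dump's custom archive format (the database names `mydb` and `newdb` are placeholders):

$ pg_dump -Fc mydb -f mydb.dump
$ createdb newdb
$ pg_restore -d newdb mydb.dump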
2. Client Applications
2.1. clusterdb
2.1.1. Synopsis
clusterdb [connection-option...] [--verbose | -v] [--table | -t table] ... [dbname]

clusterdb [connection-option...] [--verbose | -v] --all | -a
2.1.2. Options
clusterdb accepts the following command-line arguments:
- `-a`, `--all`

  Cluster all databases.

- `[-d] dbname`, `dbname`

  Specifies the name of the database to be clustered, when `-a`/`--all` is not used. If this is not specified, the database name is read from the environment variable `PGDATABASE`. If that is not set, the user name specified for the connection is used. The `dbname` can be a connection string. If so, connection string parameters will override any conflicting command line options.

- `-e`, `--echo`

  Echo the commands that clusterdb generates and sends to the server.

- `-q`, `--quiet`

  Do not display progress messages.

- `-t table`, `--table=table`

  Cluster `table` only. Multiple tables can be clustered by writing multiple `-t` switches.

- `-v`, `--verbose`

  Print detailed information during processing.

- `-V`, `--version`

  Print the clusterdb version and exit.

- `-?`, `--help`

  Show help about clusterdb command line arguments, and exit.

clusterdb also accepts the following command-line arguments for connection parameters:

- `-h host`, `--host=host`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as.

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force clusterdb to prompt for a password before connecting to a database. This option is never essential, since clusterdb will automatically prompt for a password if the server demands password authentication. However, clusterdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.

- `--maintenance-db=dbname`

  Specifies the name of the database to connect to in order to discover which databases should be clustered, when `-a`/`--all` is used. If not specified, the `postgres` database will be used, or if that does not exist, `template1` will be used. This can be a connection string. If so, connection string parameters will override any conflicting command line options. Also, connection string parameters other than the database name itself will be re-used when connecting to other databases.

2.1.3. Environment

- `PGDATABASE`, `PGHOST`, `PGPORT`, `PGUSER`

  Default connection parameters.

- `PG_COLOR`

  Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.

This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
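As a brief illustration (the table and database names are placeholders), the following hypothetical session clusters the single table `mytable` in the database `demo`, and then re-clusters every previously clustered table in that database:

$ clusterdb --table=mytable demo
$ clusterdb demo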
2.2. createdb
createdb — create a new IvorySQL database
2.2.2. Options
createdb accepts the following command-line arguments:
- `dbname`

  Specifies the name of the database to be created. The name must be unique among all IvorySQL databases in this cluster. The default is to create a database with the same name as the current system user.

- `description`

  Specifies a comment to be associated with the newly created database.

- `-D tablespace`, `--tablespace=tablespace`

  Specifies the default tablespace for the database. (This name is processed as a double-quoted identifier.)

- `-e`, `--echo`

  Echo the commands that createdb generates and sends to the server.

- `-E encoding`, `--encoding=encoding`

  Specifies the character encoding scheme to be used in this database.

- `-l locale`, `--locale=locale`

  Specifies the locale to be used in this database. This is equivalent to specifying both `--lc-collate` and `--lc-ctype`.

- `--lc-collate=locale`

  Specifies the `LC_COLLATE` setting to be used in this database.

- `--lc-ctype=locale`

  Specifies the `LC_CTYPE` setting to be used in this database.

- `--icu-locale=locale`

  Specifies the ICU locale ID to be used in this database, if the ICU locale provider is selected.

- `--locale-provider={libc|icu}`

  Specifies the locale provider for the database’s default collation.

- `-O owner`, `--owner=owner`

  Specifies the database user who will own the new database. (This name is processed as a double-quoted identifier.)

- `-S strategy`, `--strategy=strategy`

  Specifies the database creation strategy. See CREATE DATABASE STRATEGY for more details.

- `-T template`, `--template=template`

  Specifies the template database from which to build this database. (This name is processed as a double-quoted identifier.)

- `-V`, `--version`

  Print the createdb version and exit.

- `-?`, `--help`

  Show help about createdb command line arguments, and exit.

The options `-D`, `-l`, `-E`, `-O`, and `-T` correspond to options of the underlying SQL command `CREATE DATABASE`; see there for more information about them.
createdb also accepts the following command-line arguments for connection parameters:
- `-h host`, `--host=host`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or the local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as.

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force createdb to prompt for a password before connecting to a database. This option is never essential, since createdb will automatically prompt for a password if the server demands password authentication. However, createdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.

- `--maintenance-db=dbname`

  Specifies the name of the database to connect to when creating the new database. If not specified, the `postgres` database will be used; if that does not exist (or if it is the name of the new database being created), `template1` will be used. This can be a connection string. If so, connection string parameters will override any conflicting command line options.
2.2.3. Environment
- `PGDATABASE`

  If set, the name of the database to create, unless overridden on the command line.

- `PGHOST`, `PGPORT`, `PGUSER`

  Default connection parameters. `PGUSER` also determines the name of the database to create, if it is not specified on the command line or by `PGDATABASE`.

- `PG_COLOR`

  Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.

This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.2.4. Diagnostics
In case of difficulty, see CREATE DATABASE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
2.2.5. Examples
To create the database `demo` using the default database server:

$ createdb demo

To create the database `demo` using the server on host `eden`, port 5000, using the `template0` template database, here is the command-line command and the underlying SQL command:

$ createdb -p 5000 -h eden -T template0 -e demo
CREATE DATABASE demo TEMPLATE template0;
2.3. createuser
createuser — define a new IvorySQL user account
2.3.1. Synopsis
createuser [connection-option...] [option...] [username]

Description

createuser creates a new IvorySQL user (or more precisely, a role). Only superusers and users with `CREATEROLE` privilege can create new users, so createuser must be invoked by someone who can connect as a superuser or a user with `CREATEROLE` privilege.

If you wish to create a role with the `SUPERUSER`, `REPLICATION`, or `BYPASSRLS` privilege, you must connect as a superuser, not merely with `CREATEROLE` privilege. Being a superuser implies the ability to bypass all access permission checks within the database, so superuser access should not be granted lightly. `CREATEROLE` also conveys very extensive privileges.

createuser is a wrapper around the SQL command `CREATE ROLE`. There is no effective difference between creating users via this utility and via other methods for accessing the server.
2.3.2. Options
createuser accepts the following command-line arguments:
- `username`

  Specifies the name of the IvorySQL user to be created.

- `-c number`, `--connection-limit=number`

  Set a maximum number of connections for the new user. The default is to set no limit.

- `-d`, `--createdb`

  The new user will be allowed to create databases.

- `-D`, `--no-createdb`

  The new user will not be allowed to create databases. This is the default.

- `-e`, `--echo`

  Echo the commands that createuser generates and sends to the server.

- `-E`, `--encrypted`

  This option is obsolete but still accepted for backward compatibility.

- `-g role`, `--role=role`

  Indicates role to which this role will be added immediately as a new member. Multiple roles to which this role will be added as a member can be specified by writing multiple `-g` switches.

- `-i`, `--inherit`

  The new role will automatically inherit privileges of roles it is a member of. This is the default.

- `-I`, `--no-inherit`

  The new role will not automatically inherit privileges of roles it is a member of.

- `--interactive`

  Prompt for the user name if none is specified on the command line, and also prompt for whichever of the options `-d`/`-D`, `-r`/`-R`, `-s`/`-S` is not specified on the command line.

- `-l`, `--login`

  The new user will be allowed to log in (that is, the user name can be used as the initial session user identifier). This is the default.

- `-L`, `--no-login`

  The new user will not be allowed to log in. (A role without login privilege is still useful as a means of managing database permissions.)

- `-P`, `--pwprompt`

  If given, createuser will issue a prompt for the password of the new user. This is not necessary if you do not plan on using password authentication.

- `-r`, `--createrole`

  The new user will be allowed to create, alter, drop, comment on, change the security label for, and grant or revoke membership in other roles; that is, this user will have `CREATEROLE` privilege. See role creation for more details about what capabilities are conferred by this privilege.

- `-R`, `--no-createrole`

  The new user will not be allowed to create new roles. This is the default.

- `-s`, `--superuser`

  The new user will be a superuser.

- `-S`, `--no-superuser`

  The new user will not be a superuser. This is the default.

- `-V`, `--version`

  Print the createuser version and exit.

- `--replication`

  The new user will have the `REPLICATION` privilege, which is described more fully in the documentation for CREATE ROLE.

- `--no-replication`

  The new user will not have the `REPLICATION` privilege, which is described more fully in the documentation for CREATE ROLE.

- `-?`, `--help`

  Show help about createuser command line arguments, and exit.
createuser also accepts the following command-line arguments for connection parameters:
- `-h host`, `--host=host`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as (not the user name to create).

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force createuser to prompt for a password (for connecting to the server, not for the password of the new user). This option is never essential, since createuser will automatically prompt for a password if the server demands password authentication. However, createuser will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
2.3.3. Environment
- `PGHOST`, `PGPORT`, `PGUSER`

  Default connection parameters.

- `PG_COLOR`

  Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.

This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.3.4. Diagnostics
In case of difficulty, see CREATE ROLE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
2.3.5. Examples
To create a user `joe` on the default database server:

$ createuser joe

To create a user `joe` on the default database server with prompting for some additional attributes:

$ createuser --interactive joe
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

To create the same user `joe` using the server on host `eden`, port 5000, with attributes explicitly specified, taking a look at the underlying command:

$ createuser -h eden -p 5000 -S -D -R -e joe
CREATE ROLE joe NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;

To create the user `joe` as a superuser, and assign a password immediately:

$ createuser -P -s -e joe
Enter password for new role: xyzzy
Enter it again: xyzzy
CREATE ROLE joe PASSWORD 'md5b5f5ba1a423792b526f799ae4eb3d59e' SUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;

In the above example, the new password isn’t actually echoed when typed, but we show what was typed for clarity. As you see, the password is encrypted before it is sent to the server.
2.4. dropdb
dropdb — remove an IvorySQL database
2.4.2. Options
dropdb accepts the following command-line arguments:
- `dbname`

  Specifies the name of the database to be removed.

- `-e`, `--echo`

  Echo the commands that dropdb generates and sends to the server.

- `-f`, `--force`

  Attempt to terminate all existing connections to the target database before dropping it. See DROP DATABASE for more information on this option.

- `-i`, `--interactive`

  Issues a verification prompt before doing anything destructive.

- `-V`, `--version`

  Print the dropdb version and exit.

- `--if-exists`

  Do not throw an error if the database does not exist. A notice is issued in this case.

- `-?`, `--help`

  Show help about dropdb command line arguments, and exit.
dropdb also accepts the following command-line arguments for connection parameters:
- `-h host`, `--host=host`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as.

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force dropdb to prompt for a password before connecting to a database. This option is never essential, since dropdb will automatically prompt for a password if the server demands password authentication. However, dropdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.

- `--maintenance-db=dbname`

  Specifies the name of the database to connect to in order to drop the target database. If not specified, the `postgres` database will be used; if that does not exist (or is the database being dropped), `template1` will be used. This can be a connection string. If so, connection string parameters will override any conflicting command line options.
2.4.3. Environment
- `PGHOST`, `PGPORT`, `PGUSER`

  Default connection parameters.

- `PG_COLOR`

  Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.

This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.4.4. Diagnostics
In case of difficulty, see DROP DATABASE and psql for discussions of potential problems and error messages. The database server must be running at the targeted host. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
2.4.5. Examples
To destroy the database `demo` on the default database server:

$ dropdb demo

To destroy the database `demo` using the server on host `eden`, port 5000, with verification and a peek at the underlying command:

$ dropdb -p 5000 -h eden -i -e demo
Database "demo" will be permanently deleted.
Are you sure? (y/n) y
DROP DATABASE demo;
2.5. dropuser
dropuser — remove an IvorySQL user account
2.5.2. Options
dropuser accepts the following command-line arguments:
- `username`

  Specifies the name of the IvorySQL user to be removed. You will be prompted for a name if none is specified on the command line and the `-i`/`--interactive` option is used.

- `-e`, `--echo`

  Echo the commands that dropuser generates and sends to the server.

- `-i`, `--interactive`

  Prompt for confirmation before actually removing the user, and prompt for the user name if none is specified on the command line.

- `-V`, `--version`

  Print the dropuser version and exit.

- `--if-exists`

  Do not throw an error if the user does not exist. A notice is issued in this case.

- `-?`, `--help`

  Show help about dropuser command line arguments, and exit.
dropuser also accepts the following command-line arguments for connection parameters:
- `-h host`, `--host=host`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as (not the user name to drop).

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force dropuser to prompt for a password before connecting to a database. This option is never essential, since dropuser will automatically prompt for a password if the server demands password authentication. However, dropuser will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
2.5.3. Environment
- `PGHOST`, `PGPORT`, `PGUSER`

  Default connection parameters.

- `PG_COLOR`

  Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.

This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.5.5. Examples
To remove user `joe` from the default database server:

$ dropuser joe

To remove user `joe` using the server on host `eden`, port 5000, with verification and a peek at the underlying command:

$ dropuser -p 5000 -h eden -i -e joe
Role "joe" will be permanently removed.
Are you sure? (y/n) y
DROP ROLE joe;
2.6. ecpg
ecpg — embedded SQL C preprocessor
2.6.2. Options
ecpg accepts the following command-line arguments:

- `-c`

  Automatically generate certain C code from SQL code. Currently, this works for `EXEC SQL TYPE`.

- `-C mode`

  Set a compatibility mode. `mode` can be `INFORMIX`, `INFORMIX_SE`, or `ORACLE`.

- `-D symbol`

  Define a C preprocessor symbol.

- `-h`

  Process header files. When this option is specified, the output file extension becomes `.h` not `.c`, and the default input file extension is `.pgh` not `.pgc`. Also, the `-c` option is forced on.

- `-i`

  Parse system include files as well.

- `-I directory`

  Specify an additional include path, used to find files included via `EXEC SQL INCLUDE`. Defaults are `.` (current directory), `/usr/local/include`, the IvorySQL include directory which is defined at compile time (default: `/usr/local/pgsql/include`), and `/usr/include`, in that order.

- `-o filename`

  Specifies that ecpg should write all its output to the given `filename`. Write `-o -` to send all output to standard output.

- `-r option`

  Selects run-time behavior. `option` can be one of the following:

  - `no_indicator`: Do not use indicators but instead use special values to represent null values. Historically there have been databases using this approach.
  - `prepare`: Prepare all statements before using them. Libecpg will keep a cache of prepared statements and reuse a statement if it gets executed again. If the cache runs full, libecpg will free the least used statement.
  - `questionmarks`: Allow question mark as placeholder for compatibility reasons. This used to be the default long ago.

- `-t`

  Turn on autocommit of transactions. In this mode, each SQL command is automatically committed unless it is inside an explicit transaction block. In the default mode, commands are committed only when `EXEC SQL COMMIT` is issued.

- `-v`

  Print additional information including the version and the "include" path.

- `--version`

  Print the ecpg version and exit.

- `-?`, `--help`

  Show help about ecpg command line arguments, and exit.
2.6.3. Notes
When compiling the preprocessed C code files, the compiler needs to be able to find the ECPG header files in the IvorySQL include directory. Therefore, you might have to use the `-I` option when invoking the compiler (e.g., `-I/usr/local/pgsql/include`).

Programs using C code with embedded SQL have to be linked against the `libecpg` library, for example using the linker options `-L/usr/local/pgsql/lib -lecpg`.

The value of either of these directories that is appropriate for the installation can be found out using pg_config.
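Putting these notes together, a minimal build of a hypothetical embedded SQL program `prog.pgc` might look like this, assuming the default installation paths mentioned above (use `pg_config --includedir` and `pg_config --libdir` to find the actual directories):

$ ecpg prog.pgc
$ cc -c prog.c -I/usr/local/pgsql/include
$ cc -o prog prog.o -L/usr/local/pgsql/lib -lecpg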
2.7. pg_amcheck
pg_amcheck — checks for corruption in one or more IvorySQL databases
2.7.2. Options
The following command-line options control what is checked:
- `-a`, `--all`

  Check all databases, except for any excluded via `--exclude-database`.

- `-d pattern`, `--database=pattern`

  Check databases matching the specified `pattern`, except for any excluded by `--exclude-database`. This option can be specified more than once.

- `-D pattern`, `--exclude-database=pattern`

  Exclude databases matching the given `pattern`. This option can be specified more than once.

- `-i pattern`, `--index=pattern`

  Check indexes matching the specified `pattern`, unless they are otherwise excluded. This option can be specified more than once. This is similar to the `--relation` option, except that it applies only to indexes, not to other relation types.

- `-I pattern`, `--exclude-index=pattern`

  Exclude indexes matching the specified `pattern`. This option can be specified more than once. This is similar to the `--exclude-relation` option, except that it applies only to indexes, not other relation types.

- `-r pattern`, `--relation=pattern`

  Check relations matching the specified `pattern`, unless they are otherwise excluded. This option can be specified more than once. Patterns may be unqualified, e.g. `myrel*`, or they may be schema-qualified, e.g. `myschema*.myrel*`, or database-qualified and schema-qualified, e.g. `mydb*.myschema*.myrel*`. A database-qualified pattern will add matching databases to the list of databases to be checked.

- `-R pattern`, `--exclude-relation=pattern`

  Exclude relations matching the specified `pattern`. This option can be specified more than once. As with `--relation`, the `pattern` may be unqualified, schema-qualified, or database- and schema-qualified.

- `-s pattern`, `--schema=pattern`

  Check tables and indexes in schemas matching the specified `pattern`, unless they are otherwise excluded. This option can be specified more than once. To select only tables in schemas matching a particular pattern, consider using something like `--table=SCHEMAPAT.* --no-dependent-indexes`. To select only indexes, consider using something like `--index=SCHEMAPAT.*`. A schema pattern may be database-qualified. For example, you may write `--schema=mydb*.myschema*` to select schemas matching `myschema*` in databases matching `mydb*`.

- `-S pattern`, `--exclude-schema=pattern`

  Exclude tables and indexes in schemas matching the specified `pattern`. This option can be specified more than once. As with `--schema`, the pattern may be database-qualified.

- `-t pattern`, `--table=pattern`

  Check tables matching the specified `pattern`, unless they are otherwise excluded. This option can be specified more than once. This is similar to the `--relation` option, except that it applies only to tables, materialized views, and sequences, not to indexes.

- `-T pattern`, `--exclude-table=pattern`

  Exclude tables matching the specified `pattern`. This option can be specified more than once. This is similar to the `--exclude-relation` option, except that it applies only to tables, materialized views, and sequences, not to indexes.

- `--no-dependent-indexes`

  By default, if a table is checked, any btree indexes of that table will also be checked, even if they are not explicitly selected by an option such as `--index` or `--relation`. This option suppresses that behavior.

- `--no-dependent-toast`

  By default, if a table is checked, its toast table, if any, will also be checked, even if it is not explicitly selected by an option such as `--table` or `--relation`. This option suppresses that behavior.

- `--no-strict-names`

  By default, if an argument to `--database`, `--table`, `--index`, or `--relation` matches no objects, it is a fatal error. This option downgrades that error to a warning.
The following command-line options control checking of tables:
- `--exclude-toast-pointers`

  By default, whenever a toast pointer is encountered in a table, a lookup is performed to ensure that it references apparently-valid entries in the toast table. These checks can be quite slow, and this option can be used to skip them.

- `--on-error-stop`

  After reporting all corruptions on the first page of a table where corruption is found, stop processing that table relation and move on to the next table or index. Note that index checking always stops after the first corrupt page. This option only has meaning relative to table relations.

- `--skip=option`

  If `all-frozen` is given, table corruption checks will skip over pages in all tables that are marked as all frozen. If `all-visible` is given, table corruption checks will skip over pages in all tables that are marked as all visible. By default, no pages are skipped. This can be specified as `none`, but since this is the default, it need not be mentioned.

- `--startblock=block`

  Start checking at the specified block number. An error will occur if the table relation being checked has fewer than this number of blocks. This option does not apply to indexes, and is probably only useful when checking a single table relation. See `--endblock` for further caveats.

- `--endblock=block`

  End checking at the specified block number. An error will occur if the table relation being checked has fewer than this number of blocks. This option does not apply to indexes, and is probably only useful when checking a single table relation. If both a regular table and a toast table are checked, this option will apply to both, but higher-numbered toast blocks may still be accessed while validating toast pointers, unless that is suppressed using `--exclude-toast-pointers`.
The following command-line options control checking of B-tree indexes:
- `--heapallindexed`

  For each index checked, verify the presence of all heap tuples as index tuples in the index using amcheck's `heapallindexed` option.

- `--parent-check`

  For each btree index checked, use amcheck's `bt_index_parent_check` function, which performs additional checks of parent/child relationships during index checking. The default is to use amcheck’s `bt_index_check` function, but note that use of the `--rootdescend` option implicitly selects `bt_index_parent_check`.

- `--rootdescend`

  For each index checked, re-find tuples on the leaf level by performing a new search from the root page for each tuple using amcheck's `rootdescend` option. Use of this option implicitly also selects the `--parent-check` option. This form of verification was originally written to help in the development of btree index features. It may be of limited use or even of no use in helping detect the kinds of corruption that occur in practice. It may also cause corruption checking to take considerably longer and consume considerably more resources on the server.
2.7.3. Warning
The extra checks performed against B-tree indexes when the `--parent-check` option or the `--rootdescend` option is specified require relatively strong relation-level locks. These checks are the only checks that will block concurrent data modification from `INSERT`, `UPDATE`, and `DELETE` commands.
The following command-line options control the connection to the server:
- `-h hostname`, `--host=hostname`

  Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.

- `-p port`, `--port=port`

  Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.

- `-U username`, `--username=username`

  User name to connect as.

- `-w`, `--no-password`

  Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

- `-W`, `--password`

  Force pg_amcheck to prompt for a password before connecting to a database. This option is never essential, since pg_amcheck will automatically prompt for a password if the server demands password authentication. However, pg_amcheck will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.

- `--maintenance-db=dbname`

  Specifies a database or connection string to be used to discover the list of databases to be checked. If neither `--all` nor any option including a database pattern is used, no such connection is required and this option does nothing. Otherwise, any connection string parameters other than the database name which are included in the value for this option will also be used when connecting to the databases being checked. If this option is omitted, the default is `postgres` or, if that fails, `template1`.
Other options are also available:
- `-e`, `--echo`

  Echo to stdout all SQL sent to the server.

- `-j num`, `--jobs=num`

  Use `num` concurrent connections to the server, or one per object to be checked, whichever is less. The default is to use a single connection.

- `-P`, `--progress`

  Show progress information. Progress information includes the number of relations for which checking has been completed, and the total size of those relations. It also includes the total number of relations that will eventually be checked, and the estimated size of those relations.

- `-v`, `--verbose`

  Print more messages. In particular, this will print a message for each relation being checked, and will increase the level of detail shown for server errors.

- `-V`, `--version`

  Print the pg_amcheck version and exit.

- `--install-missing`, `--install-missing=schema`

  Install any missing extensions that are required to check the database(s). If not yet installed, each extension’s objects will be installed into the given `schema`, or if not specified into schema `pg_catalog`. At present, the only required extension is amcheck.

- `-?`, `--help`

  Show help about pg_amcheck command line arguments, and exit.
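As a hedged illustration of combining these options (the database and schema names are placeholders), the following hypothetical invocation checks the tables and indexes in schemas matching `public` in the database `mydb`, using four parallel connections and showing progress:

$ pg_amcheck --schema=public --jobs=4 --progress mydb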
2.8. pg_basebackup
pg_basebackup — take a base backup of an IvorySQL cluster
2.8.2. Options
The following command-line options control the location and format of the output:
- `-D directory`, `--pgdata=directory`

  Sets the target directory to write the output to. pg_basebackup will create this directory (and any missing parent directories) if it does not exist. If it already exists, it must be empty. When the backup is in tar format, the target directory may be specified as `-` (dash), causing the tar file to be written to `stdout`. This option is required.

- `-F format`, `--format=format`

  Selects the format for the output. `format` can be one of the following:

  - `p`, `plain`: Write the output as plain files, with the same layout as the source server’s data directory and tablespaces. When the cluster has no additional tablespaces, the whole database will be placed in the target directory. If the cluster contains additional tablespaces, the main data directory will be placed in the target directory, but all other tablespaces will be placed in the same absolute path as they have on the source server. (See `--tablespace-mapping` to change that.) This is the default format.

  - `t`, `tar`: Write the output as tar files in the target directory. The main data directory’s contents will be written to a file named `base.tar`, and each other tablespace will be written to a separate tar file named after that tablespace’s OID. If the target directory is specified as `-` (dash), the tar contents will be written to standard output, suitable for piping to (for example) gzip. This is only allowed if the cluster has no additional tablespaces and WAL streaming is not used.

- `-R`, `--write-recovery-conf`

  Creates a `standby.signal` file and appends connection settings to the `IvorySQL.auto.conf` file in the target directory (or within the base archive file when using tar format). This eases setting up a standby server using the results of the backup. The `IvorySQL.auto.conf` file will record the connection settings and, if specified, the replication slot that pg_basebackup is using, so that streaming replication will use the same settings later on.

- `-t target`, `--target=target`

  Instructs the server where to place the base backup. The default target is `client`, which specifies that the backup should be sent to the machine where pg_basebackup is running. If the target is instead set to `server:/some/path`, the backup will be stored on the machine where the server is running in the `/some/path` directory. Storing a backup on the server requires superuser privileges or having privileges of the `pg_write_server_files` role. If the target is set to `blackhole`, the contents are discarded and not stored anywhere. This should only be used for testing purposes, as you will not end up with an actual backup. Since WAL streaming is implemented by pg_basebackup rather than by the server, this option cannot be used together with `-Xstream`. Since that is the default, when this option is specified, you must also specify either `-Xfetch` or `-Xnone`.

- `-T olddir=newdir`, `--tablespace-mapping=olddir=newdir`

  Relocates the tablespace in directory `olddir` to `newdir` during the backup. To be effective, `olddir` must exactly match the path specification of the tablespace as it is defined on the source server. (But it is not an error if there is no tablespace in `olddir` on the source server.) Meanwhile `newdir` is a directory in the receiving host’s filesystem. As with the main target directory, `newdir` need not exist already, but if it does exist it must be empty. Both `olddir` and `newdir` must be absolute paths. If either path needs to contain an equal sign (`=`), precede that with a backslash. This option can be specified multiple times for multiple tablespaces. If a tablespace is relocated in this way, the symbolic links inside the main data directory are updated to point to the new location. So the new data directory is ready to be used for a new server instance with all tablespaces in the updated locations. Currently, this option only works with plain output format; it is ignored if tar format is selected.

- `--waldir=waldir`

  Sets the directory to write WAL (write-ahead log) files to. By default WAL files will be placed in the `pg_wal` subdirectory of the target directory, but this option can be used to place them elsewhere. `waldir` must be an absolute path. As with the main target directory, `waldir` need not exist already, but if it does exist it must be empty. This option can only be specified when the backup is in plain format.

- `-X method`, `--wal-method=method`

  Includes the required WAL (write-ahead log) files in the backup. This will include all write-ahead logs generated during the backup. Unless the method `none` is specified, it is possible to start a postmaster in the target directory without the need to consult the log archive, thus making the output a completely standalone backup. The following methods for collecting the write-ahead logs are supported:

  - `n`, `none`: Don’t include write-ahead logs in the backup.

  - `f`, `fetch`: The write-ahead log files are collected at the end of the backup. Therefore, it is necessary for the source server’s wal_keep_size parameter to be set high enough that the required log data is not removed before the end of the backup. If the required log data has been recycled before it’s time to transfer it, the backup will fail and be unusable. When tar format is used, the write-ahead log files will be included in the `base.tar` file.

  - `s`, `stream`: Stream write-ahead log data while the backup is being taken. This method will open a second connection to the server and start streaming the write-ahead log in parallel while running the backup. Therefore, it will require two replication connections, not just one. As long as the client can keep up with the write-ahead log data, using this method requires no extra write-ahead logs to be saved on the source server. When tar format is used, the write-ahead log files will be written to a separate file named `pg_wal.tar` (if the server is a version earlier than 10, the file will be named `pg_xlog.tar`). This value is the default.

- `-z`, `--gzip`

  Enables gzip compression of tar file output, with the default compression level. Compression is only available when using the tar format, and the suffix `.gz` will automatically be added to all tar filenames.

- `-Z level`, `-Z method[:detail]`, `--compress=level`, `--compress=method[:detail]`

  Requests compression of the backup. If `client` or `server` is included, it specifies where the compression is to be performed. Compressing on the server will reduce transfer bandwidth but will increase server CPU consumption. The default is `client` except when `--target` is used. In that case, the backup is not being sent to the client, so only server compression is sensible. When `-Xstream`, which is the default, is used, server-side compression will not be applied to the WAL. To compress the WAL, use client-side compression, or specify `-Xfetch`. The compression method can be set to `gzip`, `lz4`, `zstd`, or `none` for no compression. A compression detail string can optionally be specified. If the detail string is an integer, it specifies the compression level. Otherwise, it should be a comma-separated list of items, each of the form `keyword` or `keyword=value`. Currently, the supported keywords are `level` and `workers`. If no compression level is specified, the default compression level will be used. If only a level is specified without mentioning an algorithm, `gzip` compression will be used if the level is greater than 0, and no compression will be used if the level is 0. When the tar format is used with `gzip`, `lz4`, or `zstd`, the suffix `.gz`, `.lz4`, or `.zst`, respectively, will be automatically added to all tar filenames. When the plain format is used, client-side compression may not be specified, but it is still possible to request server-side compression. If this is done, the server will compress the backup for transmission, and the client will decompress and extract it. When this option is used in combination with `-Xstream`, `pg_wal.tar` will be compressed using `gzip` if client-side gzip compression is selected, but will not be compressed if any other compression algorithm is selected, or if server-side compression is selected.
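For example, tying several of these output options together, a hypothetical gzip-compressed tar-format backup with streamed WAL (host and target directory are placeholders) could be taken like this:

$ pg_basebackup -h mydbserver -D /backups/base -Ft -z -Xstream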
The following command-line options control the generation of the backup and the invocation of the program:
-
-c {fast|spread}
--checkpoint={fast|spread}
Sets checkpoint mode to fast (immediate) or spread (the default) .
-
-C
--create-slot
Specifies that the replication slot named by the --slot
option should be created before starting the backup. An error is raised if the slot already exists.
-
-l label
--label=label
Sets the label for the backup. If none is specified, a default value of “pg_basebackup base backup
” will be used.
-
-n
--no-clean
By default, when pg_basebackup
aborts with an error, it removes any directories it might have created before discovering that it cannot finish the job (for example, the target directory and write-ahead log directory). This option inhibits tidying-up and is thus useful for debugging.Note that tablespace directories are not cleaned up either way.
-
-N
--no-sync
By default, pg_basebackup
will wait for all files to be written safely to disk. This option causes pg_basebackup
to return without waiting, which is faster, but means that a subsequent operating system crash can leave the base backup corrupt. Generally, this option is useful for testing but should not be used when creating a production installation.
-
-P
--progress
Enables progress reporting. Turning this on will deliver an approximate progress report during the backup. Since the database may change during the backup, this is only an approximation and may not end at exactly 100%
. In particular, when WAL log is included in the backup, the total amount of data cannot be estimated in advance, and in this case the estimated target size will increase once it passes the total estimate without WAL.
-
-r rate
--max-rate=rate
Sets the maximum transfer rate at which data is collected from the source server. This can be useful to limit the impact of pg_basebackup on the server. Values are in kilobytes per second. Use a suffix of M
to indicate megabytes per second. A suffix of k
is also accepted, and has no effect. Valid values are between 32 kilobytes per second and 1024 megabytes per second.This option always affects transfer of the data directory. Transfer of WAL files is only affected if the collection method is fetch
.
-
-S slotname
--slot=slotname
This option can only be used together with -X stream
. It causes WAL streaming to use the specified replication slot. If the base backup is intended to be used as a streaming-replication standby using a replication slot, the standby should then use the same replication slot name as primary_slot_name. This ensures that the primary server does not remove any necessary WAL data in the time between the end of the base backup and the start of streaming replication on the new standby.The specified replication slot has to exist unless the option -C
is also used.If this option is not specified and the server supports temporary replication slots (version 10 and later), then a temporary replication slot is automatically used for WAL streaming.
-
-v
--verbose
Enables verbose mode. Will output some extra steps during startup and shutdown, as well as show the exact file name that is currently being processed if progress reporting is also enabled.
-
--manifest-checksums=algorithm
Specifies the checksum algorithm that should be applied to each file included in the backup manifest. Currently, the available algorithms are NONE, CRC32C, SHA224, SHA256, SHA384, and SHA512. The default is CRC32C. If NONE is selected, the backup manifest will not contain any checksums. Otherwise, it will contain a checksum of each file in the backup using the specified algorithm. In addition, the manifest will always contain a SHA256 checksum of its own contents. The SHA algorithms are significantly more CPU-intensive than CRC32C, so selecting one of them may increase the time required to complete the backup. Using a SHA hash function provides a cryptographically secure digest of each file for users who wish to verify that the backup has not been tampered with, while the CRC32C algorithm provides a checksum that is much faster to calculate; it is good at catching errors due to accidental changes but is not resistant to malicious modifications. Note that, to be useful against an adversary who has access to the backup, the backup manifest would need to be stored securely elsewhere or otherwise verified not to have been modified since the backup was taken. pg_verifybackup can be used to check the integrity of a backup against the backup manifest.
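For instance (directory name is arbitrary), a backup written with SHA256 file checksums can later be verified against its manifest:
$ pg_basebackup -D backup --manifest-checksums=SHA256
$ pg_verifybackup backup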
-
--manifest-force-encode
Forces all filenames in the backup manifest to be hex-encoded. If this option is not specified, only non-UTF8 filenames are hex-encoded. This option is mostly intended to test that tools which read a backup manifest file properly handle this case.
-
--no-estimate-size
Prevents the server from estimating the total amount of backup data that will be streamed, resulting in the backup_total column in the pg_stat_progress_basebackup view always being NULL. Without this option, the backup will start by enumerating the size of the entire database, and then go back and send the actual contents. This may make the backup take slightly longer, and in particular it will take longer before the first data is sent. This option is useful to avoid such estimation time if it’s too long. This option is not allowed when using --progress.
-
--no-manifest
Disables generation of a backup manifest. If this option is not specified, the server will generate and send a backup manifest which can be verified using pg_verifybackup. The manifest is a list of every file present in the backup with the exception of any WAL files that may be included. It also stores the size, last modification time, and an optional checksum for each file.
-
--no-slot
Prevents the creation of a temporary replication slot for the backup. By default, if log streaming is selected but no slot name is given with the -S option, then a temporary replication slot is created (if supported by the source server). The main purpose of this option is to allow taking a base backup when the server has no free replication slots. Using a replication slot is almost always preferred, because it prevents needed WAL from being removed by the server during the backup.
-
--no-verify-checksums
Disables verification of checksums, if they are enabled on the server the base backup is taken from. By default, checksums are verified and checksum failures will result in a non-zero exit status. However, the base backup will not be removed in such a case, as if the --no-clean option had been used. Checksum verification failures will also be reported in the pg_stat_database view.
The following command-line options control the connection to the source server:
-
-d connstr
--dbname=connstr
Specifies parameters used to connect to the server, as a connection string; these will override any conflicting command line options. The option is called --dbname for consistency with other client applications, but because pg_basebackup doesn’t connect to any particular database in the cluster, any database name in the connection string will be ignored.
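For example (host, port, and user are placeholders), the connection parameters can be supplied as a single string:
$ pg_basebackup -d "host=mydbserver port=5432 user=replicator" -D backup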
-
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for a Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-
-s interval
--status-interval=interval
Specifies the number of seconds between status packets sent back to the source server. Smaller values allow more accurate monitoring of backup progress from the server. A value of zero disables periodic status updates completely, although an update will still be sent when requested by the server, to avoid timeout-based disconnects. The default value is 10 seconds.
-
-U username
--username=username
Specifies the user name to connect as.
-
-w
--no-password
Prevents issuing a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Forces pg_basebackup to prompt for a password before connecting to the source server. This option is never essential, since pg_basebackup will automatically prompt for a password if the server demands password authentication. However, pg_basebackup will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Other options are also available:
-
-V
--version
Prints the pg_basebackup version and exits.
-
-?
--help
Shows help about pg_basebackup command line arguments, and exits.
2.8.3. Environment
This utility, like most other IvorySQL utilities, uses the environment variables supported by libpq.
The environment variable PG_COLOR specifies whether to use color in diagnostic messages. Possible values are always, auto and never.
2.8.4. Notes
At the beginning of the backup, a checkpoint needs to be performed on the source server. This can take some time (especially if the option --checkpoint=fast is not used), during which pg_basebackup will appear to be idle.
The backup will include all files in the data directory and tablespaces, including the configuration files and any additional files placed in the directory by third parties, except certain temporary files managed by IvorySQL. But only regular files and directories are copied, except that symbolic links used for tablespaces are preserved. Symbolic links pointing to certain directories known to IvorySQL are copied as empty directories. Other symbolic links and special device files are skipped.
In plain format, tablespaces will be backed up to the same path they have on the source server, unless the option --tablespace-mapping is used. Without this option, running a plain format base backup on the same host as the server will not work if tablespaces are in use, because the backup would have to be written to the same directory locations as the original tablespaces.
When tar format is used, it is the user’s responsibility to unpack each tar file before starting an IvorySQL server that uses the data. If there are additional tablespaces, the tar files for them need to be unpacked in the correct locations. In this case the symbolic links for those tablespaces will be created by the server according to the contents of the tablespace_map file that is included in the base.tar file.
pg_basebackup works with servers of the same or an older major version.
pg_basebackup will preserve group permissions for data files if group permissions are enabled on the source cluster.
2.8.5. Examples
To create a base backup of the server at mydbserver and store it in the local directory /usr/local/pgsql/data:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/data
To create a backup of the local server with one compressed tar file for each tablespace, and store it in the directory backup, showing a progress report while running:
$ pg_basebackup -D backup -Ft -z -P
To create a backup of a single-tablespace local database and compress this with bzip2:
$ pg_basebackup -D - -Ft -X fetch | bzip2 > backup.tar.bz2
(This command will fail if there are multiple tablespaces in the database.)
To create a backup of a local database where the tablespace in /opt/ts is relocated to ./backup/ts:
$ pg_basebackup -D backup/data -T /opt/ts=$(pwd)/backup/ts
To create a backup of a local server with one tar file for each tablespace compressed with gzip at level 9, stored in the directory backup:
$ pg_basebackup -D backup -Ft --compress=gzip:9
2.9. pgbench
pgbench — run a benchmark test on IvorySQL
2.9.2. Caution
pgbench -i creates four tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers, destroying any existing tables of these names. Be very careful to use another database if you have tables having these names!
At the default “scale factor” of 1, the tables initially contain this many rows:
table # of rows
---------------------------------
pgbench_branches 1
pgbench_tellers 10
pgbench_accounts 100000
pgbench_history 0
You can (and, for most purposes, probably should) increase the number of rows by using the -s (scale factor) option. The -F (fillfactor) option might also be used at this point.
Once you have done the necessary setup, you can run your benchmark with a command that doesn’t include -i, that is
pgbench [ options ] dbname
In nearly all cases, you’ll need some options to make a useful test. The most important options are -c (number of clients), -t (number of transactions), -T (time limit), and -f (specify a custom script file). See below for a full list.
2.9.3. Options
The following is divided into three subsections. Different options are used during database initialization and while running benchmarks, but some options are useful in both cases.
2.9.4. Initialization Options
pgbench accepts the following command-line initialization arguments:
-
dbname
Specifies the name of the database to test in. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.
-
-i
--initialize
Required to invoke initialization mode.
-
-I init_steps
--init-steps=init_steps
Perform just a selected set of the normal initialization steps. init_steps specifies the initialization steps to be performed, using one character per step. Each step is invoked in the specified order. The default is dtgvp. The available steps are:
- d (Drop): Drop any existing pgbench tables.
- t (create Tables): Create the tables used by the standard pgbench scenario, namely pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.
- g or G (Generate data, client-side or server-side): Generate data and load it into the standard tables, replacing any data already present. With g (client-side data generation), data is generated in the pgbench client and then sent to the server. This uses the client/server bandwidth extensively through a COPY. pgbench uses the FREEZE option with version 14 or later of IvorySQL to speed up subsequent VACUUM, unless partitions are enabled. Using g causes logging to print one message every 100,000 rows while generating data for the pgbench_accounts table. With G (server-side data generation), only small queries are sent from the pgbench client and the data is actually generated in the server. No significant bandwidth is required for this variant, but the server will do more work. Using G causes logging not to print any progress message while generating data. The default initialization behavior uses client-side data generation (equivalent to g).
- v (Vacuum): Invoke VACUUM on the standard tables.
- p (create Primary keys): Create primary key indexes on the standard tables.
- f (create Foreign keys): Create foreign key constraints between the standard tables. (Note that this step is not performed by default.)
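For example (database name is illustrative), the following runs the default steps plus the optional foreign-key step, at scale 10:
$ pgbench -i -I dtgvpf -s 10 mydb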
-
-F fillfactor
--fillfactor=fillfactor
Create the pgbench_accounts, pgbench_tellers and pgbench_branches tables with the given fillfactor. Default is 100.
-
-n
--no-vacuum
Perform no vacuuming during initialization. (This option suppresses the v initialization step, even if it was specified in -I.)
-
-q
--quiet
Switch logging to quiet mode, producing only one progress message per 5 seconds. The default logging prints one message each 100,000 rows, which often outputs many lines per second (especially on good hardware). This setting has no effect if G is specified in -I.
-
-s scale_factor
--scale=scale_factor
Multiply the number of rows generated by the scale factor. For example, -s 100 will create 10,000,000 rows in the pgbench_accounts table. Default is 1. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid columns) will switch to using larger integers (bigint), in order to be big enough to hold the range of account identifiers.
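As a quick illustration (database name is a placeholder), initializing at scale 100 creates ten million rows in pgbench_accounts:
$ pgbench -i -s 100 mydb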
-
--foreign-keys
Create foreign key constraints between the standard tables. (This option adds the f step to the initialization step sequence, if it is not already present.)
-
--index-tablespace=index_tablespace
Create indexes in the specified tablespace, rather than the default tablespace.
-
--partition-method=NAME
Create a partitioned pgbench_accounts table with NAME method. Expected values are range or hash. This option requires that --partitions is set to non-zero. If unspecified, default is range.
-
--partitions=NUM
Create a partitioned pgbench_accounts table with NUM partitions of nearly equal size for the scaled number of accounts. Default is 0, meaning no partitioning.
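For instance (all values are illustrative), a hash-partitioned accounts table with 8 partitions can be created at initialization time:
$ pgbench -i -s 50 --partitions=8 --partition-method=hash mydb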
-
--tablespace=tablespace
Create tables in the specified tablespace, rather than the default tablespace.
-
--unlogged-tables
Create all tables as unlogged tables, rather than permanent tables.
2.9.5. Benchmarking Options
pgbench accepts the following command-line benchmarking arguments:
-
-b scriptname[@weight]
--builtin=scriptname[@weight]
Add the specified built-in script to the list of scripts to be executed. Available built-in scripts are: tpcb-like, simple-update and select-only. Unambiguous prefixes of built-in names are accepted. With the special name list, show the list of built-in scripts and exit immediately. Optionally, write an integer weight after @ to adjust the probability of selecting this script versus other ones. The default weight is 1. See below for details.
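As a sketch, the following weights the two built-in scripts so that select-only is chosen about nine times as often as tpcb-like:
$ pgbench -b tpcb-like@1 -b select-only@9 -c 10 -T 60 mydb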
-
-c clients
--client=clients
Number of clients simulated, that is, number of concurrent database sessions. Default is 1.
-
-C
--connect
Establish a new connection for each transaction, rather than doing it just once per client session. This is useful to measure the connection overhead.
-
-d
--debug
Print debugging output.
-
-D varname=value
--define=varname=value
Define a variable for use by a custom script (see below). Multiple -D options are allowed.
-
-f filename[@weight]
--file=filename[@weight]
Add a transaction script read from filename to the list of scripts to be executed. Optionally, write an integer weight after @ to adjust the probability of selecting this script versus other ones. The default weight is 1. (To use a script file name that includes an @ character, append a weight so that there is no ambiguity, for example filen@me@1.) See below for details.
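For example (the script file and variable are hypothetical), a custom script can be combined with a variable definition that the script references as :rows:
$ pgbench -f myscript.sql -D rows=1000 -c 4 -T 30 mydb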
-
-j threads
--jobs=threads
Number of worker threads within pgbench. Using more than one thread can be helpful on multi-CPU machines. Clients are distributed as evenly as possible among available threads. Default is 1.
-
-l
--log
Write information about each transaction to a log file. See below for details.
-
-L limit
--latency-limit=limit
Transactions that last more than limit milliseconds are counted and reported separately, as late. When throttling is used (--rate=…), transactions that lag behind schedule by more than limit ms, and thus have no hope of meeting the latency limit, are not sent to the server at all. They are counted and reported separately as skipped. When the --max-tries option is used, a transaction which fails due to a serialization anomaly or from a deadlock will not be retried if the total time of all its tries is greater than limit ms. To limit only the time of tries and not their number, use --max-tries=0. By default, the option --max-tries is set to 1 and transactions with serialization/deadlock errors are not retried.
-
-M querymode
--protocol=querymode
Protocol to use for submitting queries to the server:
- simple: use simple query protocol.
- extended: use extended query protocol.
- prepared: use extended query protocol with prepared statements.
In the prepared mode, pgbench reuses the parse analysis result starting from the second query iteration, so pgbench runs faster than in other modes. The default is simple query protocol.
-
-n
--no-vacuum
Perform no vacuuming before running the test. This option is necessary if you are running a custom test scenario that does not include the standard tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.
-
-N
--skip-some-updates
Run built-in simple-update script. Shorthand for -b simple-update.
-
-P sec
--progress=sec
Show progress report every sec seconds. The report includes the time since the beginning of the run, the TPS since the last report, and the transaction latency average, standard deviation, and the number of failed transactions since the last report. Under throttling (-R), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time, thus it also includes the average schedule lag time. When --max-tries is used to enable transaction retries after serialization/deadlock errors, the report includes the number of retried transactions and the sum of all retries.
-
-r
--report-per-command
Report the following statistics for each command after the benchmark finishes: the average per-statement latency (execution time from the perspective of the client), the number of failures, and the number of retries after serialization or deadlock errors in this command. The report displays retry statistics only if the --max-tries option is not equal to 1.
-
-R rate
--rate=rate
Execute transactions targeting the specified rate instead of running as fast as possible (the default). The rate is given in transactions per second. If the targeted rate is above the maximum possible rate, the rate limit won’t impact the results.

The rate is targeted by starting transactions along a Poisson-distributed schedule time line. The expected start time schedule moves forward based on when the client first started, not when the previous transaction ended. That approach means that when transactions go past their original scheduled end time, it is possible for later ones to catch up again.

When throttling is active, the transaction latency reported at the end of the run is calculated from the scheduled start times, so it includes the time each transaction had to wait for the previous transaction to finish. The wait time is called the schedule lag time, and its average and maximum are also reported separately. The transaction latency with respect to the actual transaction start time, i.e., the time spent executing the transaction in the database, can be computed by subtracting the schedule lag time from the reported latency.

If --latency-limit is used together with --rate, a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time. Such transactions are not sent to the server, but are skipped altogether and counted separately.

A high schedule lag time is an indication that the system cannot process transactions at the specified rate, with the chosen number of clients and threads. When the average transaction execution time is longer than the scheduled interval between each transaction, each successive transaction will fall further behind, and the schedule lag time will keep increasing the longer the test run is. When that happens, you will have to reduce the specified transaction rate.
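As an illustration (the numbers are arbitrary), a throttled run targeting 200 transactions per second that also skips transactions lagging more than 10 ms behind schedule:
$ pgbench -c 10 -j 2 -T 120 -R 200 -L 10 mydb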
-
-s scale_factor
--scale=scale_factor
Report the specified scale factor in pgbench’s output. With the built-in tests, this is not necessary; the correct scale factor will be detected by counting the number of rows in the pgbench_branches table. However, when testing only custom benchmarks (-f option), the scale factor will be reported as 1 unless this option is used.
-
-S
--select-only
Run built-in select-only script. Shorthand for -b select-only.
-
-t transactions
--transactions=transactions
Number of transactions each client runs. Default is 10.
-
-T seconds
--time=seconds
Run the test for this many seconds, rather than a fixed number of transactions per client. -t and -T are mutually exclusive.
-
-v
--vacuum-all
Vacuum all four standard tables before running the test. With neither -n nor -v, pgbench will vacuum the pgbench_tellers and pgbench_branches tables, and will truncate pgbench_history.
-
--aggregate-interval=seconds
Length of aggregation interval (in seconds). May be used only with the -l option. With this option, the log contains per-interval summary data, as described below.
-
--failures-detailed
Report failures in per-transaction and aggregation logs, as well as in the main and per-script reports, grouped by the following types: serialization failures; deadlock failures.
-
--log-prefix=prefix
Set the filename prefix for the log files created by --log. The default is pgbench_log.
-
--max-tries=number_of_tries
Enable retries for transactions with serialization/deadlock errors and set the maximum number of these tries. This option can be combined with the --latency-limit option which limits the total time of all transaction tries; moreover, you cannot use an unlimited number of tries (--max-tries=0) without --latency-limit or --time. The default value is 1 and transactions with serialization/deadlock errors are not retried.
-
--progress-timestamp
When showing progress (option -P), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run. The unit is in seconds, with millisecond precision after the dot. This helps compare logs generated by various tools.
-
--random-seed=seed
Set random generator seed. Seeds the system random number generator, which then produces a sequence of initial generator states, one for each thread. Values for seed may be: time (the default, the seed is based on the current time), rand (use a strong random source, failing if none is available), or an unsigned decimal integer value. The random generator is invoked explicitly from a pgbench script (random… functions) or implicitly (for instance option --rate uses it to schedule transactions). When explicitly set, the value used for seeding is shown on the terminal. Any value allowed for seed may also be provided through the environment variable PGBENCH_RANDOM_SEED. To ensure that the provided seed impacts all possible uses, put this option first or use the environment variable.

Setting the seed explicitly allows reproducing a pgbench run exactly, as far as random numbers are concerned. As the random state is managed per thread, this means the exact same pgbench run for an identical invocation if there is one client per thread and there are no external or data dependencies. From a statistical viewpoint reproducing runs exactly is a bad idea because it can hide the performance variability or improve performance unduly, e.g., by hitting the same pages as a previous run. However, it may also be of great help for debugging, for instance re-running a tricky case which leads to an error. Use wisely.
-
--sampling-rate=rate
Sampling rate, used when writing data into the log, to reduce the amount of log generated. If this option is given, only the specified fraction of transactions are logged. 1.0 means all transactions will be logged, 0.05 means only 5% of the transactions will be logged. Remember to take the sampling rate into account when processing the log file. For example, when computing TPS values, you need to multiply the numbers accordingly (e.g., with 0.01 sample rate, you’ll only get 1/100 of the actual TPS).
-
--show-script=scriptname
Show the actual code of builtin script scriptname on stderr, and exit immediately.
-
--verbose-errors
Print messages about all errors and failures (errors without retrying) including which limit for retries was exceeded and how far it was exceeded for the serialization/deadlock failures. (Note that in this case the output can be significantly increased.)
2.9.6. Common Options
pgbench also accepts the following common command-line arguments for connection parameters:
-
-h hostname
--host=hostname
The database server’s host name.
-
-p port
--port=port
The database server’s port number.
-
-U login
--username=login
The user name to connect as.
-
-V
--version
Print the pgbench version and exit.
-
-?
--help
Show help about pgbench command line arguments, and exit.
2.9.7. Exit Status
A successful run will exit with status 0. Exit status 1 indicates static problems such as invalid command-line options or internal errors which are supposed to never occur. Early errors that occur when starting the benchmark, such as initial connection failures, also exit with status 1. Errors during the run, such as database errors or problems in the script, will result in exit status 2. In the latter case, pgbench will print partial results.
2.9.8. Environment
-
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters.
This utility, like most other IvorySQL utilities, uses the environment variables supported by libpq.
The environment variable PG_COLOR specifies whether to use color in diagnostic messages. Possible values are always, auto and never.
2.10. pg_config
pg_config — retrieve information about the installed version of IvorySQL
2.10.2. Options
To use pg_config, supply one or more of the following options:
-
--bindir
Print the location of user executables. Use this, for example, to find the psql program. This is normally also the location where the pg_config program resides.
-
--docdir
Print the location of documentation files.
-
--htmldir
Print the location of HTML documentation files.
-
--includedir
Print the location of C header files of the client interfaces.
-
--pkgincludedir
Print the location of other C header files.
-
--includedir-server
Print the location of C header files for server programming.
-
--libdir
Print the location of object code libraries.
-
--pkglibdir
Print the location of dynamically loadable modules, or where the server would search for them. (Other architecture-dependent data files might also be installed in this directory.)
-
--localedir
Print the location of locale support files. (This will be an empty string if locale support was not configured when IvorySQL was built.)
-
--mandir
Print the location of manual pages.
-
--sharedir
Print the location of architecture-independent support files.
-
--sysconfdir
Print the location of system-wide configuration files.
-
--pgxs
Print the location of extension makefiles.
-
--configure
Print the options that were given to the configure script when IvorySQL was configured for building. This can be used to reproduce the identical configuration, or to find out with what options a binary package was built. (Note however that binary packages often contain vendor-specific custom patches.) See also the examples below.
-
--cc
Print the value of the CC variable that was used for building IvorySQL. This shows the C compiler used.
-
--cppflags
Print the value of the CPPFLAGS variable that was used for building IvorySQL. This shows C compiler switches needed at preprocessing time (typically, -I switches).
-
--cflags
Print the value of the CFLAGS variable that was used for building IvorySQL. This shows C compiler switches.
-
--cflags_sl
Print the value of the CFLAGS_SL variable that was used for building IvorySQL. This shows extra C compiler switches used for building shared libraries.
-
--ldflags
Print the value of the LDFLAGS variable that was used for building IvorySQL. This shows linker switches.
-
--ldflags_ex
Print the value of the LDFLAGS_EX variable that was used for building IvorySQL. This shows linker switches used for building executables only.
-
--ldflags_sl
Print the value of the LDFLAGS_SL variable that was used for building IvorySQL. This shows linker switches used for building shared libraries only.
-
--libs
Print the value of the LIBS variable that was used for building IvorySQL. This normally contains -l switches for external libraries linked into IvorySQL.
-
--version
Print the version of IvorySQL.
-
-?
--help
Show help about pg_config command line arguments, and exit.
If more than one option is given, the information is printed in that order, one item per line. If no options are given, all available information is printed, with labels.
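As a usage sketch (the source file name is hypothetical), pg_config output can be embedded in a compile command for a libpq client program:
$ cc -I"$(pg_config --includedir)" -L"$(pg_config --libdir)" -o myprog myprog.c -lpq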
2.10.3. Example
To reproduce the build configuration of the current IvorySQL installation, run the following command:
eval ./configure `pg_config --configure`
The output of pg_config --configure contains shell quotation marks so arguments with spaces are represented correctly. Therefore, using eval is required for proper results.
2.11. pg_dump
pg_dump — extract an IvorySQL database into a script file or other archive file
2.11.2. Options
The following command-line options control the content and format of the output.
-
dbname
Specifies the name of the database to be dumped. If this is not specified, the environment variable `PGDATABASE` is used. If that is not set, the user name specified for the connection is used.
-
-a
--data-only
Dump only the data, not the schema (data definitions). Table data, large objects, and sequence values are dumped. This option is similar to, but for historical reasons not identical to, specifying `--section=data`.
-
-b
--blobs
Include large objects in the dump. This is the default behavior except when `--schema`, `--table`, or `--schema-only` is specified. The `-b` switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when `--data-only` is used, but not when `--schema-only` is.
-
-B
--no-blobs
Exclude large objects in the dump. When both `-b` and `-B` are given, the behavior is to output large objects, when data is being dumped, see the `-b` documentation.
-
-c
--clean
Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless `--if-exists` is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.) This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call `pg_restore`.
-
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If `--clean` is also specified, the script drops and recreates the target database before reconnecting to it. With `--create`, the output also includes the database's comment if any, and any configuration variable settings that are specific to this database, that is, any `ALTER DATABASE ... SET ...` and `ALTER ROLE ... IN DATABASE ... SET ...` commands that mention this database. Access privileges for the database itself are also dumped, unless `--no-acl` is specified. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call `pg_restore`.
-
-e `pattern`
--extension=`pattern`
Dump only extensions matching `pattern`. When this option is not specified, all non-system extensions in the target database will be dumped. Multiple extensions can be selected by writing multiple `-e` switches. The `pattern` parameter is interpreted as a pattern according to the same rules used by psql's `\d` commands, so multiple extensions can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards. Any configuration relation registered by `pg_extension_config_dump` is included in the dump if its extension is specified by `--extension`.

Note: When `-e` is specified, pg_dump makes no attempt to dump any other database objects that the selected extension(s) might depend upon. Therefore, there is no guarantee that the results of a specific-extension dump can be successfully restored by themselves into a clean database.
-
-E `encoding`
--encoding=`encoding`
Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the `PGCLIENTENCODING` environment variable to the desired dump encoding.)
-
-f `file`
--file=`file`
Send output to the specified file. This parameter can be omitted for file-based output formats, in which case the standard output is used. It must be given for the directory output format however, where it specifies the target directory instead of a file. In this case the directory is created by `pg_dump` and must not exist before.
-
-F `format`
--format=`format`
Selects the format of the output. `format` can be one of the following:
- `p` / `plain`: Output a plain-text SQL script file (the default).
- `c` / `custom`: Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
- `d` / `directory`: Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory-format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool. This format is compressed by default and also supports parallel dumps.
- `t` / `tar`: Output a `tar`-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore.
-
-j `njobs`
--jobs=`njobs`
Run the dump in parallel by dumping `njobs` tables simultaneously. This option may reduce the time needed to perform the dump but it also increases the load on the database server. You can only use this option with the directory output format because this is the only output format where multiple processes can write their data at the same time.

pg_dump will open `njobs` + 1 connections to the database, so make sure your [max_connections](https://www.IvorySQL.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS) setting is high enough to accommodate all connections.

Requesting exclusive locks on database objects while running a parallel dump could cause the dump to fail. The reason is that the pg_dump leader process requests shared locks ([ACCESS SHARE](https://www.IvorySQL.org/docs/current/explicit-locking.html#LOCKING-TABLES)) on the objects that the worker processes are going to dump later in order to make sure that nobody deletes them and makes them go away while the dump is running. If another client then requests an exclusive lock on a table, that lock will not be granted but will be queued waiting for the shared lock of the leader process to be released. Consequently any other access to the table will not be granted either and will queue after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions this would be a classic deadlock situation. To detect this conflict, the pg_dump worker process requests another shared lock using the `NOWAIT` option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump.

To perform a parallel dump, the database server needs to support synchronized snapshots, a feature that was introduced in IvorySQL for primary servers and in version 10 for standbys. With this feature, database clients can ensure they see the same data set even though they use different connections. `pg_dump -j` uses multiple database connections; it connects to the database once with the leader process and once again for each worker job. Without the synchronized snapshot feature, the different worker jobs wouldn't be guaranteed to see the same data in each connection, which could lead to an inconsistent backup.
-
-n `pattern`
--schema=`pattern`
Dump only schemas matching `pattern`; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple `-n` switches. The `pattern` parameter is interpreted as a pattern according to the same rules used by psql's `\d` commands, so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see [Examples](https://www.IvorySQL.org/docs/current/app-pgdump.html#PG-DUMP-EXAMPLES) below.

Note: When `-n` is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.

Note: Non-schema objects such as blobs are not dumped when `-n` is specified. You can add blobs back to the dump with the `--blobs` switch.
-
-N `pattern`
--exclude-schema=`pattern`
Do not dump any schemas matching `pattern`. The pattern is interpreted according to the same rules as for `-n`. `-N` can be given more than once to exclude schemas matching any of several patterns. When both `-n` and `-N` are given, the behavior is to dump just the schemas that match at least one `-n` switch but no `-N` switches. If `-N` appears without `-n`, then schemas matching `-N` are excluded from what is otherwise a normal dump.
-
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify `-O`. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call `pg_restore`.
-
-R
--no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-
-s
--schema-only
Dump only the object definitions (schema), not data. This option is the inverse of `--data-only`. It is similar to, but for historical reasons not identical to, specifying `--section=pre-data --section=post-data`. (Do not confuse this with the `--schema` option, which uses the word “schema” in a different meaning.) To exclude table data for only a subset of tables in the database, see `--exclude-table-data`.
-
-S `username`
--superuser=`username`
Specify the superuser user name to use when disabling triggers. This is relevant only if `--disable-triggers` is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
-
-t `pattern`
--table=`pattern`
Dump only tables with names matching `pattern`. Multiple tables can be selected by writing multiple `-t` switches. The `pattern` parameter is interpreted as a pattern according to the same rules used by psql's `\d` commands, so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards. As well as tables, this option can be used to dump the definition of matching views, materialized views, foreign tables, and sequences. It will not dump the contents of views or materialized views, and the contents of foreign tables will only be dumped if the corresponding foreign server is specified with `--include-foreign-data`. The `-n` and `-N` switches have no effect when `-t` is used, because tables selected by `-t` will be dumped regardless of those switches, and non-table objects will not be dumped.

Note: When `-t` is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.
-
-T `pattern`
--exclude-table=`pattern`
Do not dump any tables matching `pattern`. The pattern is interpreted according to the same rules as for `-t`. `-T` can be given more than once to exclude tables matching any of several patterns. When both `-t` and `-T` are given, the behavior is to dump just the tables that match at least one `-t` switch but no `-T` switches. If `-T` appears without `-t`, then tables matching `-T` are excluded from what is otherwise a normal dump.
-
-v
--verbose
Specifies verbose mode. This will cause pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error. Repeating the option causes additional debug-level messages to appear on standard error.
-
-V
--version
Print the pg_dump version and exit.
-
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
-
-Z `0..9`
--compress=`0..9`
Specify the compression level to use. Zero means no compression. For the custom and directory archive formats, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
-
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.
-
--column-inserts
--attribute-inserts
Dump data as `INSERT` commands with explicit column names (`INSERT INTO table (column, ...) VALUES ...`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-IvorySQL databases. Any error during restoring will cause only rows that are part of the problematic `INSERT` to be lost, rather than the entire table contents.
-
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
-
--disable-triggers
This option is relevant only when creating a data-only dump. It instructs pg_dump to include commands to temporarily disable triggers on the target tables while the data is restored. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data restore. Presently, the commands emitted for `--disable-triggers` must be done as superuser. So, you should also specify a superuser name with `-S`, or preferably be careful to start the resulting script as a superuser. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call `pg_restore`.
-
--enable-row-security
This option is relevant only when dumping the contents of a table which has row security. By default, pg_dump will set [row_security](https://www.IvorySQL.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_dump to set [row_security](https://www.IvorySQL.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) to on instead, allowing the user to dump the parts of the contents of the table that they have access to. Note that if you use this option currently, you probably also want the dump to be in `INSERT` format, as the `COPY FROM` during restore does not support row security.
-
--exclude-table-data=`pattern`
Do not dump data for any tables matching `pattern`. The pattern is interpreted according to the same rules as for `-t`. `--exclude-table-data` can be given more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it. To exclude data for all tables in the database, see `--schema-only`.
-
--extra-float-digits=`ndigits`
Use the specified value of `extra_float_digits` when dumping floating-point data, instead of the maximum available precision. Routine dumps made for backup purposes should not use this option.
-
--if-exists
Use conditional commands (i.e., add an `IF EXISTS` clause) when cleaning database objects. This option is not valid unless `--clean` is also specified.
-
--include-foreign-data=`foreignserver`
Dump the data for any foreign table with a foreign server matching the `foreignserver` pattern. Multiple foreign servers can be selected by writing multiple `--include-foreign-data` switches. Also, the `foreignserver` parameter is interpreted as a pattern according to the same rules used by psql's `\d` commands, so multiple foreign servers can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards. The only exception is that an empty pattern is disallowed.

Note: When `--include-foreign-data` is specified, pg_dump does not check that the foreign table is writable. Therefore, there is no guarantee that the results of a foreign table dump can be successfully restored.
-
--inserts
Dump data as `INSERT` commands (rather than `COPY`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-IvorySQL databases. Any error during restoring will cause only rows that are part of the problematic `INSERT` to be lost, rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The `--column-inserts` option is safe against column order changes, though even slower.
-
--load-via-partition-root
When dumping data for a table partition, make the `COPY` or `INSERT` statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when restoring data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column. It is best not to use parallelism when restoring from an archive made with this option, because pg_restore will not know exactly which partition(s) a given archive data item will load data into. This could result in inefficiency due to lock conflicts between parallel jobs, or perhaps even restore failures due to foreign key constraints being set up before all the relevant data is loaded.
-
--lock-wait-timeout=`timeout`
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified `timeout`. The timeout may be specified in any of the formats accepted by `SET statement_timeout`.
-
--no-comments
Do not dump comments.
-
--no-publications
Do not dump publications.
-
--no-security-labels
Do not dump security labels.
-
--no-subscriptions
Do not dump subscriptions.
-
--no-sync
By default, `pg_dump` will wait for all files to be written safely to disk. This option causes `pg_dump` to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.
-
--no-tablespaces
Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore. This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call `pg_restore`.
-
--no-toast-compression
Do not output commands to set TOAST compression methods. With this option, all columns will be restored with the default compression setting.
-
--no-unlogged-table-data
Do not dump the contents of unlogged tables and sequences. This option has no effect on whether or not the table and sequence definitions (schema) are dumped; it only suppresses dumping the table and sequence data. Data in unlogged tables and sequences is always excluded when dumping from a standby server.
-
--on-conflict-do-nothing
Add `ON CONFLICT DO NOTHING` to `INSERT` commands. This option is not valid unless `--inserts`, `--column-inserts` or `--rows-per-insert` is also specified.
-
--quote-all-identifiers
Force quoting of all identifiers. This option is recommended when dumping a database from a server whose IvorySQL major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using `--quote-all-identifiers` prevents such issues, at the price of a harder-to-read dump script.
-
--rows-per-insert=`nrows`
Dump data as `INSERT` commands (rather than `COPY`). Controls the maximum number of rows per `INSERT` command. The value specified must be a number greater than zero. Any error during restoring will cause only rows that are part of the problematic `INSERT` to be lost, rather than the entire table contents.
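As a sketch, the following emits multi-row `INSERT` statements with at most 1000 rows each:
$ pg_dump --rows-per-insert=1000 mydb > db.sql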
-
--section=`sectionname`
Only dump the named section. The section name can be `pre-data`, `data`, or `post-data`. This option can be specified more than once to select multiple sections. The default is to dump all sections. The data section contains actual table data, large-object contents, and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.
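For instance, dumping only the pre-data and post-data sections captures the schema while skipping table contents:
$ pg_dump --section=pre-data --section=post-data mydb > schema.sql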
-
--serializable-deferrable
Use a `serializable` transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a `serialization_failure`. This option is not beneficial for a dump which is intended only for disaster recovery. It could be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it the dump may reflect a state which is not consistent with any serial execution of the transactions eventually committed. For example, if batch processing techniques are used, a batch may show as closed in the dump without all of the items which are in the batch appearing. This option will make no difference if there are no read-write transactions active when pg_dump is started. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time. Once running, performance with or without the switch is the same.
-
--snapshot=`snapshotname`
Use the specified synchronized snapshot when making a dump of the database. This option is useful when needing to synchronize the dump with a logical replication slot or with a concurrent session. In the case of a parallel dump, the snapshot name defined by this option is used rather than taking a new snapshot.
-
--strict-names
Require that each extension (`-e`/`--extension`), schema (`-n`/`--schema`) and table (`-t`/`--table`) qualifier match at least one extension/schema/table in the database to be dumped. Note that if none of the extension/schema/table qualifiers find matches, pg_dump will generate an error even without `--strict-names`. This option has no effect on `-N`/`--exclude-schema`, `-T`/`--exclude-table`, or `--exclude-table-data`. An exclude pattern failing to match any objects is not considered an error.
-
--use-set-session-authorization
Output SQL-standard `SET SESSION AUTHORIZATION` commands instead of `ALTER OWNER` commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. Also, a dump using `SET SESSION AUTHORIZATION` will certainly require superuser privileges to restore correctly, whereas `ALTER OWNER` requires lesser privileges.
-
-?
--help
Show help about pg_dump command line arguments, and exit.
The following command-line options control the database connection parameters.
-
-d `dbname`
--dbname=`dbname`
Specifies the name of the database to connect to. This is equivalent to specifying `dbname` as the first non-option argument on the command line. The `dbname` can be a [connection string](https://www.IvorySQL.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). If so, connection string parameters will override any conflicting command line options.
-
-h `host`
--host=`host`
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the `PGHOST` environment variable, if set, else a Unix domain socket connection is attempted.
-
-p `port`
--port=`port`
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the `PGPORT` environment variable, if set, or a compiled-in default.
-
-U `username`
--username=`username`
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force pg_dump to prompt for a password before connecting to a database. This option is never essential, since pg_dump will automatically prompt for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
-
--role=`rolename`
Specifies a role name to be used to create the dump. This option causes pg_dump to issue a `SET ROLE` `rolename` command after connecting to the database. It is useful when the authenticated user (specified by `-U`) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
2.11.3. Environment
-
PGDATABASE
PGHOST
PGOPTIONS
PGPORT
PGUSER
Default connection parameters.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.11.4. Diagnostics
pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, [psql](https://www.IvorySQL.org/docs/current/app-psql.html). Also, any default connection settings and environment variables used by the libpq front-end library will apply.
The database activity of pg_dump is normally collected by the cumulative statistics system. If this is undesirable, you can set parameter track_counts to false via PGOPTIONS or the ALTER USER command.
2.11.5. Notes
If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
When a data-only dump is chosen and the option `--disable-triggers` is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.
The dump file produced by pg_dump does not contain the statistics used by the optimizer to make query planning decisions. Therefore, it is wise to run `ANALYZE` after restoring from a dump file to ensure optimal performance.
When dumping logical replication subscriptions, pg_dump will generate `CREATE SUBSCRIPTION` commands that use the `connect = false` option, so that restoring the subscription does not make remote connections for creating a replication slot or for initial table copy. That way, the dump can be restored without requiring network access to the remote servers. It is then up to the user to reactivate the subscriptions in a suitable way. If the involved hosts have changed, the connection information might have to be changed. It might also be appropriate to truncate the target tables before initiating a new full table copy. If users intend to copy initial data during refresh they must create the slot with `two_phase = false`. After the initial sync, the `two_phase` option will be automatically enabled by the subscriber if the subscription had been originally created with the `two_phase = true` option.
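As a sketch of such reactivation (the subscription name mysub and the connection string are placeholders), one might update the connection information and re-enable the subscription after the restore:
ALTER SUBSCRIPTION mysub CONNECTION 'host=newhost dbname=sourcedb';
ALTER SUBSCRIPTION mysub ENABLE;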
2.11.6. Examples
To dump a database called `mydb` into an SQL-script file:
$ pg_dump mydb > db.sql
To reload such a script into a (freshly created) database named `newdb`:
$ psql -d newdb -f db.sql
To dump a database into a custom-format archive file:
$ pg_dump -Fc mydb > db.dump
To dump a database into a directory-format archive:
$ pg_dump -Fd mydb -f dumpdir
To dump a database into a directory-format archive in parallel with 5 worker jobs:
$ pg_dump -Fd mydb -j 5 -f dumpdir
To reload an archive file into a (freshly created) database named `newdb`:
$ pg_restore -d newdb db.dump
To reload an archive file into the same database it was dumped from, discarding the current contents of that database:
$ pg_restore -d postgres --clean --create db.dump
To dump a single table named `mytab`:
$ pg_dump -t mytab mydb > db.sql
To dump all tables whose names start with `emp` in the `detroit` schema, except for the table named `employee_log`:
$ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
To dump all schemas whose names start with `east` or `west` and end in `gsm`, excluding any schemas whose names contain the word `test`:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
The same, using regular expression notation to consolidate the switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
To dump all database objects except for tables whose names begin with `ts_`:
$ pg_dump -T 'ts_*' mydb > db.sql
To specify an upper-case or mixed-case name in `-t` and related switches, you need to double-quote the name; else it will be folded to lower case. But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like
$ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
2.12. pg_dumpall
pg_dumpall — extract an IvorySQL database cluster into a script file
2.12.2. Options
The following command-line options control the content and format of the output.
-
-a
--data-only
Dump only the data, not the schema (data definitions).
-
-c
--clean
Include SQL commands to clean (drop) databases before recreating them. `DROP` commands for roles and tablespaces are added as well.
-
-E encoding
--encoding=encoding
Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the `PGCLIENTENCODING` environment variable to the desired dump encoding.)
-
-f filename
--file=filename
Send output to the specified file. If this is omitted, the standard output is used.
-
-g
--globals-only
Dump only global objects (roles and tablespaces), no databases.
-
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_dumpall issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify `-O`.
-
-r
--roles-only
Dump only roles, no databases or tablespaces.
-
-s
--schema-only
Dump only the object definitions (schema), not data.
-
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers. This is relevant only if `--disable-triggers` is used. (Usually, it’s better to leave this out, and instead start the resulting script as superuser.)
-
-t
--tablespaces-only
Dump only tablespaces, no databases or roles.
-
-v
--verbose
Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and progress messages to standard error. Repeating the option causes additional debug-level messages to appear on standard error. The option is also passed down to pg_dump.
-
-V
--version
Print the pg_dumpall version and exit.
-
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
-
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.
-
--column-inserts
--attribute-inserts
Dump data as `INSERT` commands with explicit column names (`INSERT INTO table (column, …) VALUES …`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-IvorySQL databases.
-
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
-
--disable-triggers
This option is relevant only when creating a data-only dump. It instructs pg_dumpall to include commands to temporarily disable triggers on the target tables while the data is restored. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data restore. Presently, the commands emitted for `--disable-triggers` must be done as superuser. So, you should also specify a superuser name with `-S`, or preferably be careful to start the resulting script as a superuser.
-
--exclude-database=pattern
Do not dump databases whose name matches `pattern`. Multiple patterns can be excluded by writing multiple `--exclude-database` switches. The `pattern` parameter is interpreted as a pattern according to the same rules used by psql’s `\d` commands, so multiple databases can also be excluded by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent shell wildcard expansion.
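For instance, assuming the cluster contains scratch databases with names like test1 and test2, they could be skipped with:
$ pg_dumpall --exclude-database='test*' > db.out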
-
--extra-float-digits=ndigits
Use the specified value of `extra_float_digits` when dumping floating-point data, instead of the maximum available precision. Routine dumps made for backup purposes should not use this option.
-
--if-exists
Use conditional commands (i.e., add an `IF EXISTS` clause) to drop databases and other objects. This option is not valid unless `--clean` is also specified.
-
--inserts
Dump data as `INSERT` commands (rather than `COPY`). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-IvorySQL databases. Note that the restore might fail altogether if you have rearranged column order. The `--column-inserts` option is safer, though even slower.
-
--load-via-partition-root
When dumping data for a table partition, make the `COPY` or `INSERT` statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when restoring data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column.
-
--lock-wait-timeout=timeout
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified `timeout`. The timeout may be specified in any of the formats accepted by `SET statement_timeout`.
-
--no-comments
Do not dump comments.
-
--no-publications
Do not dump publications.
-
--no-role-passwords
Do not dump passwords for roles. When restored, roles will have a null password, and password authentication will always fail until the password is set. Since password values aren’t needed when this option is specified, the role information is read from the catalog view `pg_roles` instead of `pg_authid`. Therefore, this option also helps if access to `pg_authid` is restricted by some security policy.
-
--no-security-labels
Do not dump security labels.
-
--no-subscriptions
Do not dump subscriptions.
-
--no-sync
By default, pg_dumpall will wait for all files to be written safely to disk. This option causes pg_dumpall to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.
-
--no-table-access-method
Do not output commands to select table access methods. With this option, all objects will be created with whichever table access method is the default during restore.
-
--no-tablespaces
Do not output commands to create tablespaces nor select tablespaces for objects. With this option, all objects will be created in whichever tablespace is the default during restore.
-
--no-toast-compression
Do not output commands to set TOAST compression methods. With this option, all columns will be restored with the default compression setting.
-
--no-unlogged-table-data
Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data.
-
--on-conflict-do-nothing
Add `ON CONFLICT DO NOTHING` to `INSERT` commands. This option is not valid unless `--inserts` or `--column-inserts` is also specified.
-
--quote-all-identifiers
Force quoting of all identifiers. This option is recommended when dumping a database from a server whose IvorySQL major version is different from pg_dumpall’s, or when the output is intended to be loaded into a server of a different major version. By default, pg_dumpall quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using `--quote-all-identifiers` prevents such issues, at the price of a harder-to-read dump script.
-
--rows-per-insert=nrows
Dump data as `INSERT` commands (rather than `COPY`). Controls the maximum number of rows per `INSERT` command. The value specified must be a number greater than zero. Any error during restoring will cause only rows that are part of the problematic `INSERT` to be lost, rather than the entire table contents.
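A minimal sketch: to cap each generated INSERT at 100 rows, one might run:
$ pg_dumpall --rows-per-insert=100 > db.out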
-
--use-set-session-authorization
Output SQL-standard `SET SESSION AUTHORIZATION` commands instead of `ALTER OWNER` commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly.
-
-?
--help
Show help about pg_dumpall command line arguments, and exit.
The following command-line options control the database connection parameters.
-
-d connstr
--dbname=connstr
Specifies parameters used to connect to the server, as a connection string; these will override any conflicting command line options. The option is called `--dbname` for consistency with other client applications, but because pg_dumpall needs to connect to many databases, the database name in the connection string will be ignored. Use the `-l` option to specify the name of the database used for the initial connection, which will dump global objects and discover what other databases should be dumped.
-
-h host
--host=host
Specifies the host name of the machine on which the database server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the `PGHOST` environment variable, if set, else a Unix domain socket connection is attempted.
-
-l dbname
--database=dbname
Specifies the name of the database to connect to for dumping global objects and discovering what other databases should be dumped. If not specified, the `postgres` database will be used, and if that does not exist, `template1` will be used.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the `PGPORT` environment variable, if set, or a compiled-in default.
-
-U username
--username=username
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force pg_dumpall to prompt for a password before connecting to a database. This option is never essential, since pg_dumpall will automatically prompt for a password if the server demands password authentication. However, pg_dumpall will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt. Note that the password prompt will occur again for each database to be dumped. Usually, it’s better to set up a `~/.pgpass` file than to rely on manual password entry.
-
--role=rolename
Specifies a role name to be used to create the dump. This option causes pg_dumpall to issue a `SET ROLE` *`rolename`* command after connecting to the database. It is useful when the authenticated user (specified by `-U`) lacks privileges needed by pg_dumpall, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
2.12.3. Environment
-
PGHOST
PGOPTIONS
PGPORT
PGUSER
Default connection parameters.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.12.4. Notes
Since pg_dumpall calls pg_dump internally, some diagnostic messages will refer to pg_dump.
The `--clean` option can be useful even when your intention is to restore the dump script into a fresh cluster. Use of `--clean` authorizes the script to drop and re-create the built-in `postgres` and `template1` databases, ensuring that those databases will retain the same properties (for instance, locale and encoding) that they had in the source cluster. Without the option, those databases will retain their existing database-level properties, as well as any pre-existing contents.
Once restored, it is wise to run `ANALYZE` on each database so the optimizer has useful statistics. You can also run `vacuumdb -a -z` to analyze all databases.
The dump script should not be expected to run completely without errors. In particular, because the script will issue `CREATE ROLE` for every role existing in the source cluster, it is certain to get a “role already exists” error for the bootstrap superuser, unless the destination cluster was initialized with a different bootstrap superuser name. This error is harmless and should be ignored. Use of the `--clean` option is likely to produce additional harmless error messages about non-existent objects, although you can minimize those by adding `--if-exists`.
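Putting these together, a dump-and-restore cycle that tolerates pre-existing objects might look like this (the file name db.out is illustrative):
$ pg_dumpall --clean --if-exists > db.out
$ psql -f db.out postgres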
pg_dumpall requires all needed tablespace directories to exist before the restore; otherwise, database creation will fail for databases in non-default locations.
2.12.5. Examples
To dump all databases:
$ pg_dumpall > db.out
To restore database(s) from this file, you can use:
$ psql -f db.out postgres
It is not important to which database you connect here since the script file created by pg_dumpall will contain the appropriate commands to create and connect to the saved databases. An exception is that if you specified `--clean`, you must connect to the `postgres` database initially; the script will attempt to drop other databases immediately, and that will fail for the database you are connected to.
2.13. pg_isready
pg_isready — check the connection status of an IvorySQL server
2.13.2. Options
-
-d dbname
--dbname=dbname
Specifies the name of the database to connect to. The `dbname` can be a connection string. If so, connection string parameters will override any conflicting command line options.
-
-h hostname
--host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.
-
-p port
--port=port
Specifies the TCP port or the local Unix-domain socket file extension on which the server is listening for connections. Defaults to the value of the `PGPORT` environment variable or, if not set, to the port specified at compile time, usually 5432.
-
-q
--quiet
Do not display status message. This is useful when scripting.
-
-t seconds
--timeout=seconds
The maximum number of seconds to wait when attempting connection before returning that the server is not responding. Setting this to 0 disables waiting. The default is 3 seconds.
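For example (the host name dbhost is a placeholder), a monitoring script might allow up to 10 seconds before declaring the server unreachable:
$ pg_isready -h dbhost -t 10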
-
-U username
--username=username
Connect to the database as the user `username` instead of the default.
-
-V
--version
Print the pg_isready version and exit.
-
-?
--help
Show help about pg_isready command line arguments, and exit.
2.13.3. Exit Status
pg_isready returns `0` to the shell if the server is accepting connections normally, `1` if the server is rejecting connections (for example during startup), `2` if there was no response to the connection attempt, and `3` if no attempt was made (for example due to invalid parameters).
2.13.4. Environment
pg_isready, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
The environment variable `PG_COLOR` specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.
2.13.5. Notes
It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt.
2.13.6. Examples
Standard Usage:
$ pg_isready
/tmp:5432 - accepting connections
$ echo $?
0
Running with connection parameters to an IvorySQL cluster in startup:
$ pg_isready -h localhost -p 5433
localhost:5433 - rejecting connections
$ echo $?
1
Running with connection parameters to a non-responsive IvorySQL cluster:
$ pg_isready -h someremotehost
someremotehost:5432 - no response
$ echo $?
2
2.14. pg_receivewal
pg_receivewal — stream write-ahead logs from an IvorySQL server
2.14.2. Options
-
-D directory
--directory=directory
Directory to write the output to. This parameter is required.
-
-E lsn
--endpos=lsn
Automatically stop replication and exit with normal exit status 0 when receiving reaches the specified LSN. If there is a record with LSN exactly equal to `lsn`, the record will be processed.
-
--if-not-exists
Do not error out when `--create-slot` is specified and a slot with the specified name already exists.
-
-n
--no-loop
Don’t loop on connection errors. Instead, exit right away with an error.
-
--no-sync
This option causes pg_receivewal to not force WAL data to be flushed to disk. This is faster, but means that a subsequent operating system crash can leave the WAL segments corrupt. Generally, this option is useful for testing but should not be used when doing WAL archiving on a production deployment. This option is incompatible with `--synchronous`.
-
-s interval
--status-interval=interval
Specifies the number of seconds between status packets sent back to the server. This allows for easier monitoring of the progress from the server. A value of zero disables the periodic status updates completely, although an update will still be sent when requested by the server, to avoid timeout disconnect. The default value is 10 seconds.
-
-S slotname
--slot=slotname
Require pg_receivewal to use an existing replication slot. When this option is used, pg_receivewal will report a flush position to the server, indicating when each segment has been synchronized to disk so that the server can remove that segment if it is not otherwise needed. When the replication client of pg_receivewal is configured on the server as a synchronous standby, then using a replication slot will report the flush position to the server, but only when a WAL file is closed. Therefore, that configuration will cause transactions on the primary to wait for a long time and effectively not work satisfactorily. The option `--synchronous` (see below) must be specified in addition to make this work correctly.
-
--synchronous
Flush the WAL data to disk immediately after it has been received. Also send a status packet back to the server immediately after flushing, regardless of `--status-interval`. This option should be specified if the replication client of pg_receivewal is configured on the server as a synchronous standby, to ensure that timely feedback is sent to the server.
-
-v
--verbose
Enables verbose mode.
-
-Z level
-Z method[:detail]
--compress=level
--compress=method[:detail]
Enables compression of write-ahead logs. The compression method can be set to `gzip`, `lz4` (if IvorySQL was compiled with `--with-lz4`) or `none` for no compression. A compression detail string can optionally be specified. If the detail string is an integer, it specifies the compression level. Otherwise, it should be a comma-separated list of items, each of the form `keyword` or `keyword=value`. Currently, the only supported keyword is `level`. If no compression level is specified, the default compression level will be used. If only a level is specified without mentioning an algorithm, `gzip` compression will be used if the level is greater than 0, and no compression will be used if the level is 0. The suffix `.gz` will automatically be added to all filenames when using `gzip`, and the suffix `.lz4` is added when using `lz4`.
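An illustrative invocation (the target directory is a placeholder), streaming WAL with lz4 compression:
$ pg_receivewal -D /archive/wal --compress=lz4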
The following command-line options control the database connection parameters.
-
-d connstr
--dbname=connstr
Specifies parameters used to connect to the server, as a connection string; these will override any conflicting command line options. The option is called `--dbname` for consistency with other client applications, but because pg_receivewal doesn’t connect to any particular database in the cluster, the database name in the connection string will be ignored.
-
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the `PGHOST` environment variable, if set, else a Unix domain socket connection is attempted.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the `PGPORT` environment variable, if set, or a compiled-in default.
-
-U username
--username=username
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force pg_receivewal to prompt for a password before connecting to a database. This option is never essential, since pg_receivewal will automatically prompt for a password if the server demands password authentication. However, pg_receivewal will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
pg_receivewal can perform one of the two following actions in order to control physical replication slots:
-
--create-slot
Create a new physical replication slot with the name specified in `--slot`, then exit.
-
--drop-slot
Drop the replication slot with the name specified in `--slot`, then exit.
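A typical sequence (the slot name archive_slot and target directory are placeholders) creates the slot once, then streams into it:
$ pg_receivewal --slot=archive_slot --create-slot
$ pg_receivewal -D /archive/wal --slot=archive_slot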
Other options are also available:
-
-V
--version
Print the pg_receivewal version and exit.
-
-?
--help
Show help about pg_receivewal command line arguments, and exit.
2.14.3. Exit Status
pg_receivewal will exit with status 0 when terminated by the SIGINT signal. (That is the normal way to end it. Hence it is not an error.) For fatal errors or other signals, the exit status will be nonzero.
2.14.4. Environment
This utility, like most other IvorySQL utilities, uses the environment variables supported by libpq.
The environment variable `PG_COLOR` specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.
2.14.5. Notes
When using pg_receivewal instead of archive_command or archive_library as the main WAL backup method, it is strongly recommended to use replication slots. Otherwise, the server is free to recycle or remove write-ahead log files before they are backed up, because it does not have any information, either from archive_command or archive_library or the replication slots, about how far the WAL stream has been archived. Note, however, that a replication slot will fill up the server’s disk space if the receiver does not keep up with fetching the WAL data.
pg_receivewal will preserve group permissions on the received WAL files if group permissions are enabled on the source cluster.
2.15. pg_recvlogical
pg_recvlogical — control IvorySQL logical decoding streams
2.15.2. Options
At least one of the following options must be specified to select an action:
-
--create-slot
Create a new logical replication slot with the name specified by `--slot`, using the output plugin specified by `--plugin`, for the database specified by `--dbname`. The `--two-phase` option can be specified with `--create-slot` to enable decoding of prepared transactions.
-
--drop-slot
Drop the replication slot with the name specified by `--slot`, then exit.
-
--start
Begin streaming changes from the logical replication slot specified by `--slot`, continuing until terminated by a signal. If the server side change stream ends with a server shutdown or disconnect, retry in a loop unless `--no-loop` is specified. The stream format is determined by the output plugin specified when the slot was created. The connection must be to the same database used to create the slot.
`--create-slot` and `--start` can be specified together. `--drop-slot` cannot be combined with another action.
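For instance (the slot name demo_slot and database mydb are placeholders; test_decoding is the sample output plugin distributed with the server sources), one might create a slot and then stream its changes to standard output:
$ pg_recvlogical -d mydb --slot=demo_slot --create-slot --plugin=test_decoding
$ pg_recvlogical -d mydb --slot=demo_slot --start -f -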
The following command-line options control the location and format of the output and other replication behavior:
-
-E lsn
--endpos=lsn
In `--start` mode, automatically stop replication and exit with normal exit status 0 when receiving reaches the specified LSN. If specified when not in `--start` mode, an error is raised. If there’s a record with LSN exactly equal to `lsn`, the record will be output. The `--endpos` option is not aware of transaction boundaries and may truncate output partway through a transaction. Any partially output transaction will not be consumed and will be replayed again when the slot is next read from. Individual messages are never truncated.
-
-f filename
--file=filename
Write received and decoded transaction data into this file. Use `-` for stdout.
-
-F interval_seconds
--fsync-interval=interval_seconds
Specifies how often pg_recvlogical should issue `fsync()` calls to ensure the output file is safely flushed to disk. The server will occasionally request the client to perform a flush and report the flush position to the server. This setting is in addition to that, to perform flushes more frequently. Specifying an interval of `0` disables issuing `fsync()` calls altogether, while still reporting progress to the server. In this case, data could be lost in the event of a crash.
-
-I lsn
--startpos=lsn
In `--start` mode, start replication from the given LSN.
-
--if-not-exists
Do not error out when `--create-slot` is specified and a slot with the specified name already exists.
-
-n
--no-loop
When the connection to the server is lost, do not retry in a loop, just exit.
-
-o name[=value]
--option=name[=value]
Pass the option `name` to the output plugin with, if specified, the option value `value`. Which options exist and their effects depend on the used output plugin.
-
-P plugin
--plugin=plugin
When creating a slot, use the specified logical decoding output plugin. This option has no effect if the slot already exists.
-
-s interval_seconds
--status-interval=interval_seconds
This option has the same effect as the option of the same name in pg_receivewal. See the description there.
-
-S slot_name
--slot=slot_name
In `--start` mode, use the existing logical replication slot named `slot_name`. In `--create-slot` mode, create the slot with this name. In `--drop-slot` mode, delete the slot with this name.
-
-t
--two-phase
Enables decoding of prepared transactions. This option may only be specified with --create-slot
-
-v
--verbose
Enables verbose mode.
The following command-line options control the database connection parameters.
-
-d dbname
--dbname=dbname
The database to connect to. See the description of the actions for what this means in detail. The `dbname` can be a connection string. If so, connection string parameters will override any conflicting command line options. Defaults to the user name.
-
-h hostname-or-ip
--host=hostname-or-ip
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the `PGHOST` environment variable, if set, else a Unix domain socket connection is attempted.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the `PGPORT` environment variable, if set, or a compiled-in default.
-
-U user
--username=user
User name to connect as. Defaults to the current operating system user name.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force pg_recvlogical to prompt for a password before connecting to a database. This option is never essential, since pg_recvlogical will automatically prompt for a password if the server demands password authentication. However, pg_recvlogical will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
The following additional options are available:
-
-V
--version
Print the pg_recvlogical version and exit.
-
-?
--help
Show help about pg_recvlogical command line arguments, and exit.
2.16. pg_restore
pg_restore — restore an IvorySQL database from an archive file created by pg_dump
2.16.2. Options
pg_restore accepts the following command line arguments.
-
filename
Specifies the location of the archive file (or directory, for a directory-format archive) to be restored. If not specified, the standard input is used.
-
-a
--data-only
Restore only the data, not the schema (data definitions). Table data, large objects, and sequence values are restored, if present in the archive. This option is similar to, but for historical reasons not identical to, specifying `--section=data`.
-
-c
--clean
Clean (drop) database objects before recreating them. (Unless `--if-exists` is used, this might generate some harmless error messages, if any objects were not present in the destination database.)
-
-C
--create
Create the database before restoring into it. If `--clean` is also specified, drop and recreate the target database before connecting to it. With `--create`, pg_restore also restores the database’s comment if any, and any configuration variable settings that are specific to this database, that is, any `ALTER DATABASE … SET …` and `ALTER ROLE … IN DATABASE … SET …` commands that mention this database. Access privileges for the database itself are also restored, unless `--no-acl` is specified. When this option is used, the database named with `-d` is used only to issue the initial `DROP DATABASE` and `CREATE DATABASE` commands. All data is restored into the database name that appears in the archive.
-
-d dbname
--dbname=dbname
Connect to database `dbname` and restore directly into the database. The `dbname` can be a connection string. If so, connection string parameters will override any conflicting command line options.
-
-e
--exit-on-error
Exit if an error is encountered while sending SQL commands to the database. The default is to continue and to display a count of errors at the end of the restoration.
-
-f filename
--file=filename
Specify output file for generated script, or for the listing when used with `-l`. Use `-` for stdout.
-
-F format
--format=format
Specify format of the archive. It is not necessary to specify the format, since pg_restore will determine the format automatically. If specified, it can be one of the following:
`c` (`custom`): The archive is in the custom format of pg_dump.
`d` (`directory`): The archive is a directory archive.
`t` (`tar`): The archive is a tar archive.
-
-I index
--index=index
Restore definition of named index only. Multiple indexes may be specified with multiple `-I` switches.
-
-j number-of-jobs
--jobs=number-of-jobs
Run the most time-consuming steps of pg_restore — those that load data, create indexes, or create constraints — concurrently, using up to `number-of-jobs` concurrent sessions. This option can dramatically reduce the time to restore a large database to a server running on a multiprocessor machine. This option is ignored when emitting a script rather than connecting directly to a database server. Each job is one process or one thread, depending on the operating system, and uses a separate connection to the server. The optimal value for this option depends on the hardware setup of the server, of the client, and of the network. Factors include the number of CPU cores and the disk setup. A good place to start is the number of CPU cores on the server, but values larger than that can also lead to faster restore times in many cases. Of course, values that are too high will lead to decreased performance because of thrashing. Only the custom and directory archive formats are supported with this option. The input must be a regular file or directory (not, for example, a pipe or standard input). Also, multiple jobs cannot be used together with the option `--single-transaction`.
-
-l
--list
List the table of contents of the archive. The output of this operation can be used as input to the `-L` option. Note that if filtering switches such as `-n` or `-t` are used with `-l`, they will restrict the items listed.
-
-L list-file
--use-list=list-file
Restore only those archive elements that are listed in `list-file`, and restore them in the order they appear in the file. Note that if filtering switches such as `-n` or `-t` are used with `-L`, they will further restrict the items restored. `list-file` is normally created by editing the output of a previous `-l` operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (`;`) at the start of the line. See below for examples.
-
-n schema
--schema=schema
Restore only objects that are in the named schema. Multiple schemas may be specified with multiple `-n` switches. This can be combined with the `-t` option to restore just a specific table.
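For example (the schema and table names are placeholders), restoring a single table from one schema of a custom-format archive could look like:
$ pg_restore -d newdb -n myschema -t mytab db.dump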
-
-N schema
--exclude-schema=schema
Do not restore objects that are in the named schema. Multiple schemas to be excluded may be specified with multiple `-N` switches. When both `-n` and `-N` are given for the same schema name, the `-N` switch wins and the schema is excluded.
-
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_restore issues `ALTER OWNER` or `SET SESSION AUTHORIZATION` statements to set ownership of created schema elements. These statements will fail unless the initial connection to the database is made by a superuser (or the same user that owns all of the objects in the script). With `-O`, any user name can be used for the initial connection, and this user will own all the created objects.
-
-P function-name(argtype [, …])
--function=function-name(argtype [, …])
Restore the named function only. Be careful to spell the function name and arguments exactly as they appear in the dump file’s table of contents. Multiple functions may be specified with multiple `-P` switches.
-
-R
--no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-
-s
--schema-only
Restore only the schema (data definitions), not data, to the extent that schema entries are present in the archive. This option is the inverse of `--data-only`. It is similar to, but for historical reasons not identical to, specifying `--section=pre-data --section=post-data`. (Do not confuse this with the `--schema` option, which uses the word “schema” in a different meaning.)
-
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers. This is relevant only if `--disable-triggers` is used.
-
-t table
--table=table
Restore definition and/or data of only the named table. For this purpose, “table” includes views, materialized views, sequences, and foreign tables. Multiple tables can be selected by writing multiple `-t` switches. This option can be combined with the `-n` option to specify table(s) in a particular schema.
Note: When `-t` is specified, pg_restore makes no attempt to restore any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that a specific-table restore into a clean database will succeed.
Note: This flag does not behave identically to the `-t` flag of pg_dump. There is not currently any provision for wild-card matching in pg_restore, nor can you include a schema name within its `-t`. And, while pg_dump’s `-t` flag will also dump subsidiary objects (such as indexes) of the selected table(s), pg_restore’s `-t` flag does not include such subsidiary objects.
-
-T trigger
--trigger=trigger
Restore named trigger only. Multiple triggers may be specified with multiple `-T` switches.
-
-v
--verbose
Specifies verbose mode. This will cause pg_restore to output detailed object comments and start/stop times to the output file, and progress messages to standard error. Repeating the option causes additional debug-level messages to appear on standard error.
-
-V
--version
Print the pg_restore version and exit.
-
-x
--no-privileges
--no-acl
Prevent restoration of access privileges (grant/revoke commands).
-
-1
--single-transaction
Execute the restore as a single transaction (that is, wrap the emitted commands in `BEGIN`/`COMMIT`). This ensures that either all the commands complete successfully, or no changes are applied. This option implies `--exit-on-error`.
-
--disable-triggers
This option is relevant only when performing a data-only restore. It instructs pg_restore to execute commands to temporarily disable triggers on the target tables while the data is restored. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data restore. Presently, the commands emitted for `--disable-triggers` must be done as superuser. So you should also specify a superuser name with `-S` or, preferably, run pg_restore as an IvorySQL superuser.
-
--enable-row-security
This option is relevant only when restoring the contents of a table which has row security. By default, pg_restore will set row_security to off, to ensure that all data is restored into the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown. This parameter instructs pg_restore to set row_security to on instead, allowing the user to attempt to restore the contents of the table with row security enabled. This might still fail if the user does not have the right to insert the rows from the dump into the table. Note that this option currently also requires the dump be in `INSERT` format, as `COPY FROM` does not support row security.
-
--if-exists
Use conditional commands (i.e., add an `IF EXISTS` clause) to drop database objects. This option is not valid unless `--clean` is also specified.
-
--no-comments
Do not output commands to restore comments, even if the archive contains them.
-
--no-data-for-failed-tables
By default, table data is restored even if the creation command for the table failed (e.g., because it already exists). With this option, data for such a table is skipped. This behavior is useful if the target database already contains the desired table contents. For example, auxiliary tables for IvorySQL extensions such as PostGIS might already be loaded in the target database; specifying this option prevents duplicate or obsolete data from being loaded into them. This option is effective only when restoring directly into a database, not when producing SQL script output.
-
--no-publications
Do not output commands to restore publications, even if the archive contains them.
-
--no-security-labels
Do not output commands to restore security labels, even if the archive contains them.
-
--no-subscriptions
Do not output commands to restore subscriptions, even if the archive contains them.
-
--no-table-access-method
Do not output commands to select table access methods. With this option, all objects will be created with whichever access method is the default during restore.
-
--no-tablespaces
Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.
-
--section=sectionname
Only restore the named section. The section name can be `pre-data`, `data`, or `post-data`. This option can be specified more than once to select multiple sections. The default is to restore all sections. The data section contains actual table data as well as large-object definitions. Post-data items consist of definitions of indexes, triggers, rules and constraints other than validated check constraints. Pre-data items consist of all other data definition items.
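As an illustration (the archive name db.dump is a placeholder), a staged restore might load definitions first and data afterwards:
$ pg_restore -d newdb --section=pre-data db.dump
$ pg_restore -d newdb --section=data --section=post-data db.dump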
-
--strict-names
Require that each schema (`-n`/`--schema`) and table (`-t`/`--table`) qualifier match at least one schema/table in the backup file.
-
--use-set-session-authorization
Output SQL-standard `SET SESSION AUTHORIZATION` commands instead of `ALTER OWNER` commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly.
-
-?
--help
Show help about pg_restore command line arguments, and exit.
pg_restore also accepts the following command line arguments for connection parameters:
-
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the `PGHOST` environment variable, if set, else a Unix domain socket connection is attempted.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the `PGPORT` environment variable, if set, or a compiled-in default.
-
-U username
--username=username
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a `.pgpass` file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force pg_restore to prompt for a password before connecting to a database. This option is never essential, since pg_restore will automatically prompt for a password if the server demands password authentication. However, pg_restore will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing `-W` to avoid the extra connection attempt.
-
--role=rolename
Specifies a role name to be used to perform the restore. This option causes pg_restore to issue a `SET ROLE` *`rolename`* command after connecting to the database. It is useful when the authenticated user (specified by `-U`) lacks privileges needed by pg_restore, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows restores to be performed without violating the policy.
2.16.3. Environment
-
PGHOST
PGOPTIONS
PGPORT
PGUSER
Default connection parameters.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are `always`, `auto` and `never`.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq. However, it does not read `PGDATABASE` when a database name is not supplied.
2.16.4. Diagnostics
When a direct database connection is specified using the `-d` option, pg_restore internally executes SQL statements. If you have problems running pg_restore, make sure you are able to select information from the database using, for example, psql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
2.16.5. Notes
If your installation has any local additions to the `template1` database, be careful to load the output of pg_restore into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from `template0` not `template1`, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
The limitations of pg_restore are detailed below.
-
When restoring data to a pre-existing table and the option `--disable-triggers` is used, pg_restore emits commands to disable triggers on user tables before inserting the data, then emits commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.
-
pg_restore cannot restore large objects selectively; for instance, only those for a specific table. If an archive contains large objects, then all large objects will be restored, or none of them if they are excluded via `-L`, `-t`, or other options.
See also the pg_dump documentation for details on limitations of pg_dump.
2.16.6. Examples
Assume we have dumped a database called `mydb` into a custom-format dump file:
$ pg_dump -Fc mydb > db.dump
To drop the database and recreate it from the dump:
$ dropdb mydb
$ pg_restore -C -d postgres db.dump
The database named in the `-d` switch can be any database existing in the cluster; pg_restore only uses it to issue the `CREATE DATABASE` command for `mydb`. With `-C`, data is always restored into the database name that appears in the dump file.
To restore the dump into a new database called `newdb`:
$ createdb -T template0 newdb
$ pg_restore -d newdb db.dump
Notice we don’t use `-C`, and instead connect directly to the database to be restored into. Also note that we clone the new database from `template0` not `template1`, to ensure it is initially empty.
To reorder database items, it is first necessary to dump the table of contents of the archive:
$ pg_restore -l db.dump > db.list
The listing file consists of a header and one line for each item, e.g.:
;
; Archive created at Mon Sep 14 13:55:39 2009
; dbname: DBDEMOS
; TOC Entries: 81
; Compression: 9
; Dump Version: 1.10-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 8.3.5
; Dumped by pg_dump version: 8.3.8
;
;
; Selected TOC Entries:
;
3; 2615 2200 SCHEMA - public pasha
1861; 0 0 COMMENT - SCHEMA public pasha
1862; 0 0 ACL - public pasha
317; 1247 17715 TYPE public composite pasha
319; 1247 25899 DOMAIN public domain0 pasha
Semicolons start a comment, and the numbers at the start of lines refer to the internal archive ID assigned to each item.
Lines in the file can be commented out, deleted, and reordered. For example:
10; 145433 TABLE map_resolutions postgres
;2; 145344 TABLE species postgres
;4; 145359 TABLE nt_header postgres
6; 145402 TABLE species_records postgres
;8; 145416 TABLE ss_old postgres
could be used as input to pg_restore and would only restore items 10 and 6, in that order:
$ pg_restore -L db.list db.dump
2.17. pg_verifybackup
pg_verifybackup — verify the integrity of a base backup of an IvorySQL cluster
2.17.2. Options
pg_verifybackup accepts the following command-line arguments:
-
-e
--exit-on-error
Exit as soon as a problem with the backup is detected. If this option is not specified, pg_verifybackup will continue checking the backup even after a problem has been detected, and will report all problems detected as errors.
-
-i path
--ignore=path
Ignore the specified file or directory, which should be expressed as a relative path name, when comparing the list of data files actually present in the backup to those listed in the `backup_manifest` file. If a directory is specified, this option affects the entire subtree rooted at that location. Complaints about extra files, missing files, file size differences, or checksum mismatches will be suppressed if the relative path name matches the specified path name. This option can be specified multiple times.
-
-m path
--manifest-path=path
Use the manifest file at the specified path, rather than one located in the root of the backup directory.
-
-n
--no-parse-wal
Don’t attempt to parse write-ahead log data that will be needed to recover from this backup.
-
-q
--quiet
Don’t print anything when a backup is successfully verified.
-
-s
--skip-checksums
Do not verify data file checksums. The presence or absence of files and the sizes of those files will still be checked. This is much faster, because the files themselves do not need to be read.
-
-w path
--wal-directory=path
Try to parse WAL files stored in the specified directory, rather than in `pg_wal`. This may be useful if the backup is stored in a separate location from the WAL archive.
Other options are also available:
-
-V
--version
Print the pg_verifybackup version and exit.
-
-?
--help
Show help about pg_verifybackup command line arguments, and exit.
2.17.3. Examples
To create a base backup of the server at `mydbserver` and verify the integrity of the backup:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/data
$ pg_verifybackup /usr/local/pgsql/data
To create a base backup of the server at `mydbserver`, move the manifest somewhere outside the backup directory, and verify the backup:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/backup1234
$ mv /usr/local/pgsql/backup1234/backup_manifest /my/secure/location/backup_manifest.1234
$ pg_verifybackup -m /my/secure/location/backup_manifest.1234 /usr/local/pgsql/backup1234
To verify a backup while ignoring a file that was added manually to the backup directory, and also skipping checksum verification:
$ pg_basebackup -h mydbserver -D /usr/local/pgsql/data
$ edit /usr/local/pgsql/data/note.to.self
$ pg_verifybackup --ignore=note.to.self --skip-checksums /usr/local/pgsql/data
2.18. psql
psql — IvorySQL interactive terminal
2.18.2. Options
-
-a
--echo-all
Print all nonempty input lines to standard output as they are read. (This does not apply to lines read interactively.) This is equivalent to setting the variable `ECHO` to `all`.
-
-A
--no-align
Switches to unaligned output mode. (The default output mode is `aligned`.) This is equivalent to `\pset format unaligned`.
-
-b
--echo-errors
Print failed SQL commands to standard error output. This is equivalent to setting the variable `ECHO` to `errors`.
-
-c command
--command=command
Specifies that psql is to execute the given command string, `command`. This option can be repeated and combined in any order with the `-f` option. When either `-c` or `-f` is specified, psql does not read commands from standard input; instead it terminates after processing all the `-c` and `-f` options in sequence.
`command` must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands within a `-c` option. To achieve that, you could use repeated `-c` options or pipe the string into psql, for example:
psql -c '\x' -c 'SELECT * FROM foo;'
or
echo '\x \\ SELECT * FROM foo;' | psql
(`\\` is the separator meta-command.)
Each SQL command string passed to `-c` is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, unless there are explicit `BEGIN`/`COMMIT` commands included in the string to divide it into multiple transactions. If having several commands executed in one transaction is not desired, use repeated `-c` commands or feed multiple commands to psql’s standard input, either using echo as illustrated above, or via a shell here-document, for example:
psql <<EOF
\x
SELECT * FROM foo;
EOF
-
--csv
Switches to CSV (Comma-Separated Values) output mode. This is equivalent to `\pset format csv`.
-
-d dbname
--dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying `dbname` as the first non-option argument on the command line. The `dbname` can be a connection string. If so, connection string parameters will override any conflicting command line options.
-
-e
--echo-queries
Copy all SQL commands sent to the server to standard output as well. This is equivalent to setting the variable `ECHO` to `queries`.
-
-E
--echo-hidden
Echo the actual queries generated by `\d` and other backslash commands. You can use this to study psql’s internal operations. This is equivalent to setting the variable `ECHO_HIDDEN` to `on`.
-
-f filename
--file=filename
Read commands from the file `filename`, rather than standard input. This option can be repeated and combined in any order with the `-c` option. When either `-c` or `-f` is specified, psql does not read commands from standard input; instead it terminates after processing all the `-c` and `-f` options in sequence. Except for that, this option is largely equivalent to the meta-command `\i`.
If `filename` is `-` (hyphen), then standard input is read until an EOF indication or `\q` meta-command. This can be used to intersperse interactive input with input from files. Note however that Readline is not used in this case (much as if `-n` had been specified).
Using this option is subtly different from writing `psql < filename`. In general, both will do what you expect, but using `-f` enables some nice features such as error messages with line numbers. There is also a slight chance that using this option will reduce the start-up overhead. On the other hand, the variant using the shell’s input redirection is (in theory) guaranteed to yield exactly the same output you would have received had you entered everything by hand.
-
-F separator
--field-separator=separator
Use `separator` as the field separator for unaligned output. This is equivalent to `\pset fieldsep` or `\f`.
-
-h hostname
--host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.
-
-H
--html
Switches to HTML output mode. This is equivalent to `\pset format html` or the `\H` command.
-
-l
--list
List all available databases, then exit. Other non-connection options are ignored. This is similar to the meta-command `\list`. When this option is used, psql will connect to the database `postgres`, unless a different database is named on the command line (option `-d` or non-option argument, possibly via a service entry, but not via an environment variable).
-
-L filename
--log-file=filename
Write all query output into file filename
, in addition to the normal output destination.
-
-n
--no-readline
Do not use Readline for line editing and do not use the command history.
-
-o filename
--output=filename
Put all query output into file filename
. This is equivalent to the command \o
.
-
-p port
--port=port
Specifies the TCP port or the local Unix-domain socket file extension on which the server is listening for connections. Defaults to the value of the PGPORT
environment variable or, if not set, to the port specified at compile time, usually 5432.
-
-P assignment
--pset=assignment
Specifies printing options, in the style of \pset
. Note that here you have to separate name and value with an equal sign instead of a space. For example, to set the output format to LaTeX, you could write -P format=latex
.
-
-q
--quiet
Specifies that psql should do its work quietly. By default, it prints welcome messages and various informational output. If this option is used, none of this happens. This is useful with the -c
option. This is equivalent to setting the variable QUIET
to on
.
-
-R separator
--record-separator=separator
Use separator
as the record separator for unaligned output. This is equivalent to \pset recordsep
.
-
-s
--single-step
Run in single-step mode. That means the user is prompted before each command is sent to the server, with the option to cancel execution as well. Use this to debug scripts.
-
-S
--single-line
Runs in single-line mode where a newline terminates an SQL command, as a semicolon does. Note: This mode is provided for those who insist on it, but you are not necessarily encouraged to use it. In particular, if you mix SQL and meta-commands on a line, the order of execution might not always be clear to the inexperienced user.
-
-t
--tuples-only
Turn off printing of column names and result row count footers, etc. This is equivalent to \t
or \pset tuples_only
.
-
-T table_options
--table-attr=table_options
Specifies options to be placed within the HTML table
tag. See \pset tableattr
for details.
-
-U username
--username=username
Connect to the database as the user username
instead of the default. (You must have permission to do so, of course.)
-
-v assignment
--set=assignment
--variable=assignment
Perform a variable assignment, like the \set
meta-command. Note that you must separate name and value, if any, by an equal sign on the command line. To unset a variable, leave off the equal sign. To set a variable with an empty value, use the equal sign but leave off the value. These assignments are done during command line processing, so variables that reflect connection state will get overwritten later.
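For example, a hypothetical invocation that sets ON_ERROR_STOP so a script aborts at the first error (the script and database names are illustrative):
$ psql -v ON_ERROR_STOP=1 -f script.sql mydb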
-
-V
--version
Print the psql version and exit.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available from other sources such as a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password. Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect
as well as the initial connection attempt.
-
-W
--password
Force psql to prompt for a password before connecting to a database, even if the password will not be used. If the server requires password authentication and a password is not available from other sources such as a .pgpass
file, psql will prompt for a password in any case. However, psql will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W
to avoid the extra connection attempt. Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect
as well as the initial connection attempt.
-
-x
--expanded
Turn on the expanded table formatting mode. This is equivalent to \x
or \pset expanded
.
-
-X
--no-psqlrc
Do not read the start-up file (neither the system-wide psqlrc
file nor the user’s ~/.psqlrc
file).
-
-z
--field-separator-zero
Set the field separator for unaligned output to a zero byte. This is equivalent to \pset fieldsep_zero
.
-
-0
--record-separator-zero
Set the record separator for unaligned output to a zero byte. This is useful for interfacing, for example, with xargs -0
. This is equivalent to \pset recordsep_zero.
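As an illustration (the table and database names are hypothetical), zero-byte-separated records can be handed safely to xargs -0 even when values contain spaces or newlines:
$ psql -qAt -0 -c 'SELECT path FROM obsolete_files' mydb | xargs -0 rm --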
-
-1
--single-transaction
This option can only be used in combination with one or more -c
and/or -f
options. It causes psql to issue a BEGIN
command before the first such option and a COMMIT
command after the last one, thereby wrapping all the commands into a single transaction. If any of the commands fails and the variable ON_ERROR_STOP
was set, a ROLLBACK
command is sent instead. This ensures that either all the commands complete successfully, or no changes are applied.If the commands themselves contain BEGIN
, COMMIT
, or ROLLBACK
, this option will not have the desired effects. Also, if an individual command cannot be executed inside a transaction block, specifying this option will cause the whole transaction to fail.
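For example, to apply two script files as one transaction (the file and database names are illustrative):
$ psql -1 -f schema.sql -f data.sql mydb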
-
-?
--help[=topic]
Show help about psql and exit. The optional topic
parameter (defaulting to options
) selects which part of psql is explained: commands
describes psql’s backslash commands; options
describes the command-line options that can be passed to psql; and variables
shows help about psql configuration variables.
2.18.3. Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g., out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP
was set.
2.19. reindexdb
reindexdb — reindex an IvorySQL database
2.19.1. Synopsis
reindexdb
[connection-option
…] [option
…] [ -S
| --schema
schema
] … [ -t
| --table
table
] … [ -i
| --index
index
] … [dbname
]
reindexdb [connection-option…] [option…] -a | --all
reindexdb
[connection-option
…] [option
…] -s
| --system
[dbname
]
2.19.2. Options
reindexdb accepts the following command-line arguments:
-
-a
--all
Reindex all databases.
-
--concurrently
Use the CONCURRENTLY
option. See REINDEX, where all the caveats of this option are explained in detail.
-
[-d] dbname
dbname
Specifies the name of the database to be reindexed, when -a
/--all
is not used. If this is not specified, the database name is read from the environment variable PGDATABASE
. If that is not set, the user name specified for the connection is used. The dbname
can be a connection string. If so, connection string parameters will override any conflicting command line options.
-
-e
--echo
Echo the commands that reindexdb generates and sends to the server.
-
-i index
--index=index
Recreate index
only. Multiple indexes can be recreated by writing multiple -i
switches.
-
-j njobs
--jobs=njobs
Execute the reindex commands in parallel by running njobs
commands simultaneously. This option may reduce the processing time but it also increases the load on the database server. reindexdb will open njobs
connections to the database, so make sure your max_connections setting is high enough to accommodate all connections. Note that this option is incompatible with the --index
and --system
options.
-
-q
--quiet
Do not display progress messages.
-
-s
--system
Reindex database’s system catalogs only.
-
-S schema
--schema=schema
Reindex schema
only. Multiple schemas can be reindexed by writing multiple -S
switches.
-
-t table
--table=table
Reindex table
only. Multiple tables can be reindexed by writing multiple -t
switches.
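For example, to reindex a single table in a particular database (the table and database names are illustrative):
$ reindexdb --table=foo xyzzy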
-
--tablespace=tablespace
Specifies the tablespace where indexes are rebuilt. (This name is processed as a double-quoted identifier.)
-
-v
--verbose
Print detailed information during processing.
-
-V
--version
Print the reindexdb version and exit.
-
-?
--help
Show help about reindexdb command line arguments, and exit.
reindexdb also accepts the following command-line arguments for connection parameters:
-
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-
-U username
--username=username
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force reindexdb to prompt for a password before connecting to a database.This option is never essential, since reindexdb will automatically prompt for a password if the server demands password authentication. However, reindexdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W
to avoid the extra connection attempt.
-
--maintenance-db=dbname
Specifies the name of the database to connect to in order to discover which databases should be reindexed, when -a
/ --all
is used. If not specified, the postgres
database will be used, or if that does not exist, template1
will be used. This can be a connection string. If so, connection string parameters will override any conflicting command line options. Also, connection string parameters other than the database name itself will be re-used when connecting to other databases.
2.19.3. Environment
-
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.20. vacuumdb
vacuumdb — garbage-collect and analyze an IvorySQL database
2.20.1. Synopsis
vacuumdb
[connection-option
…] [option
…] [ -t
| --table
table
[( column
[,…] )] ] … [dbname
]
vacuumdb [connection-option…] [option…] -a | --all
2.20.2. Options
vacuumdb accepts the following command-line arguments:
-
-a
--all
Vacuum all databases.
-
[-d] dbname
dbname
Specifies the name of the database to be cleaned or analyzed, when -a
/--all
is not used. If this is not specified, the database name is read from the environment variable PGDATABASE
. If that is not set, the user name specified for the connection is used. The dbname
can be a connection string. If so, connection string parameters will override any conflicting command line options.
-
--disable-page-skipping
Disable skipping pages based on the contents of the visibility map.
-
-e
--echo
Echo the commands that vacuumdb generates and sends to the server.
-
-f
--full
Perform “full” vacuuming.
-
-F
--freeze
Aggressively “freeze” tuples.
-
--force-index-cleanup
Always remove index entries pointing to dead tuples.
-
-j njobs
--jobs=njobs
Execute the vacuum or analyze commands in parallel by running njobs
commands simultaneously. This option may reduce the processing time but it also increases the load on the database server. vacuumdb will open njobs
connections to the database, so make sure your max_connections setting is high enough to accommodate all connections. Note that using this mode together with the -f
(FULL
) option might cause deadlock failures if certain system catalogs are processed in parallel.
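For example, to vacuum and analyze a database using four concurrent connections (the database name is illustrative):
$ vacuumdb -j 4 --analyze mydb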
-
--min-mxid-age mxid_age
Only execute the vacuum or analyze commands on tables with a multixact ID age of at least mxid_age
. This setting is useful for prioritizing tables to process to prevent multixact ID wraparound. For the purposes of this option, the multixact ID age of a relation is the greatest of the ages of the main relation and its associated TOAST table, if one exists. Since the commands issued by vacuumdb will also process the TOAST table for the relation if necessary, it does not need to be considered separately.
-
--min-xid-age xid_age
Only execute the vacuum or analyze commands on tables with a transaction ID age of at least xid_age
. This setting is useful for prioritizing tables to process to prevent transaction ID wraparound. For the purposes of this option, the transaction ID age of a relation is the greatest of the ages of the main relation and its associated TOAST table, if one exists. Since the commands issued by vacuumdb will also process the TOAST table for the relation if necessary, it does not need to be considered separately.
-
--no-index-cleanup
Do not remove index entries pointing to dead tuples.
-
--no-process-toast
Skip the TOAST table associated with the table to vacuum, if any.
-
--no-truncate
Do not truncate empty pages at the end of the table.
-
-P parallel_workers
--parallel=parallel_workers
Specify the number of parallel workers for parallel vacuum. This allows the vacuum to leverage multiple CPUs to process indexes. See VACUUM.
-
-q
--quiet
Do not display progress messages.
-
--skip-locked
Skip relations that cannot be immediately locked for processing.
-
-t table [ ( column [,…] ) ]
--table=table [ ( column [,…] ) ]
Clean or analyze table
only. Column names can be specified only in conjunction with the --analyze
or --analyze-only
options. Multiple tables can be vacuumed by writing multiple -t
switches. Tip: If you specify columns, you probably have to escape the parentheses from the shell. (See examples below.)
-
-v
--verbose
Print detailed information during processing.
-
-V
--version
Print the vacuumdb version and exit.
-
-z
--analyze
Also calculate statistics for use by the optimizer.
-
-Z
--analyze-only
Only calculate statistics for use by the optimizer (no vacuum).
-
--analyze-in-stages
Only calculate statistics for use by the optimizer (no vacuum), like --analyze-only
. Run three stages of analyze; the first stage uses the lowest possible statistics target and subsequent stages build the full statistics.This option is only useful to analyze a database that currently has no statistics or has wholly incorrect ones, such as if it is newly populated from a restored dump or by pg_upgrade
. Be aware that running with this option in a database with existing statistics may cause the query optimizer choices to become transiently worse due to the low statistics targets of the early stages.
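For example, after restoring a dump into a database that has no statistics yet (the database name is illustrative):
$ vacuumdb --analyze-in-stages mydb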
-
-?
--help
Show help about vacuumdb command line arguments, and exit.
vacuumdb also accepts the following command-line arguments for connection parameters:
-
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
-
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections.
-
-U username
--username=username
User name to connect as.
-
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-
-W
--password
Force vacuumdb to prompt for a password before connecting to a database.This option is never essential, since vacuumdb will automatically prompt for a password if the server demands password authentication. However, vacuumdb will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W
to avoid the extra connection attempt.
-
--maintenance-db=dbname
Specifies the name of the database to connect to in order to discover which databases should be vacuumed, when -a
/ --all
is used. If not specified, the postgres
database will be used, or if that does not exist, template1
will be used. This can be a connection string. If so, connection string parameters will override any conflicting command line options. Also, connection string parameters other than the database name itself will be re-used when connecting to other databases.
2.20.3. Environment
-
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
2.20.5. Notes
vacuumdb might need to connect several times to the IvorySQL server, asking for a password each time. It is convenient to have a ~/.pgpass
file in such cases.
2.20.6. Examples
To clean the database test
:
$ vacuumdb test
To clean and analyze for the optimizer a database named bigdb
:
$ vacuumdb --analyze bigdb
To clean a single table foo
in a database named xyzzy
, and analyze a single column bar
of the table for the optimizer:
$ vacuumdb --analyze --verbose --table='foo(bar)' xyzzy
3. Server Applications
3.1. initdb
initdb — create a new IvorySQL database cluster
3.1.2. Options
-
-A authmethod
--auth=authmethod
This option specifies the default authentication method for local users used in pg_hba.conf
(host
and local
lines). Do not use trust
unless you trust all local users on your system. trust
is the default for ease of installation.
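For example, a hypothetical initialization that uses peer authentication on Unix-domain sockets and password authentication on TCP/IP connections (the data directory path is illustrative):
$ initdb -D /usr/local/pgsql/data --auth-local=peer --auth-host=scram-sha-256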
-
--auth-host=authmethod
This option specifies the authentication method for local users via TCP/IP connections used in pg_hba.conf
(host
lines).
-
--auth-local=authmethod
This option specifies the authentication method for local users via Unix-domain socket connections used in pg_hba.conf
(local
lines).
-
-D directory
--pgdata=directory
This option specifies the directory where the database cluster should be stored. This is the only information required by initdb
, but you can avoid writing it by setting the PGDATA
environment variable, which can be convenient since the database server (postgres
) can find the database directory later by the same variable.
-
-E encoding
--encoding=encoding
Selects the encoding of the template databases. This will also be the default encoding of any database you create later, unless you override it then. The default is derived from the locale, if the libc locale provider is used, or UTF8
if the ICU locale provider is used.
-
-g
--allow-group-access
Allows users in the same group as the cluster owner to read all cluster files created by initdb
. This option is ignored on Windows as it does not support POSIX-style group permissions.
-
--icu-locale=locale
Specifies the ICU locale ID, if the ICU locale provider is used.
-
-k
--data-checksums
Use checksums on data pages to help detect corruption by the I/O system that would otherwise be silent. Enabling checksums may incur a noticeable performance penalty. If set, checksums are calculated for all objects, in all databases. All checksum failures will be reported in the pg_stat_database
view.
-
--locale=locale
Sets the default locale for the database cluster. If this option is not specified, the locale is inherited from the environment that initdb
runs in.
-
--lc-collate=locale
--lc-ctype=locale
--lc-messages=locale
--lc-monetary=locale
--lc-numeric=locale
--lc-time=locale
Like --locale
, but only sets the locale in the specified category.
-
--no-locale
Equivalent to --locale=C
.
-
--locale-provider={libc|icu}
This option sets the locale provider for databases created in the new cluster. It can be overridden in the CREATE DATABASE
command when new databases are subsequently created. The default is libc
.
-
-N
--no-sync
By default, initdb
will wait for all files to be written safely to disk. This option causes initdb
to return without waiting, which is faster, but means that a subsequent operating system crash can leave the data directory corrupt. Generally, this option is useful for testing, but should not be used when creating a production installation.
-
--no-instructions
By default, initdb
will write instructions for how to start the cluster at the end of its output. This option causes those instructions to be left out. This is primarily intended for use by tools that wrap initdb
in platform-specific behavior, where those instructions are likely to be incorrect.
-
--pwfile=filename
Makes initdb
read the database superuser’s password from a file. The first line of the file is taken as the password.
-
-S
--sync-only
Safely write all database files to disk and exit. This does not perform any of the normal initdb operations. Generally, this option is useful for ensuring reliable recovery after changing fsync from off
to on
.
-
-T config
--text-search-config=config
Sets the default text search configuration.
-
-U username
--username=username
Selects the user name of the database superuser. This defaults to the name of the effective user running initdb
. It is really not important what the superuser’s name is, but one might choose to keep the customary name postgres, even if the operating system user’s name is different.
-
-W
--pwprompt
Makes initdb
prompt for a password to give the database superuser. If you don’t plan on using password authentication, this is not important. Otherwise you won’t be able to use password authentication until you have a password set up.
-
-X directory
--waldir=directory
This option specifies the directory where the write-ahead log should be stored.
-
--wal-segsize=size
Set the WAL segment size, in megabytes. This is the size of each individual file in the WAL log. The default size is 16 megabytes. The value must be a power of 2 between 1 and 1024 (megabytes). This option can only be set during initialization, and cannot be changed later.It may be useful to adjust this size to control the granularity of WAL log shipping or archiving. Also, in databases with a high volume of WAL, the sheer number of WAL files per directory can become a performance and management problem. Increasing the WAL file size will reduce the number of WAL files.
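For example, a hypothetical initialization with 64-megabyte WAL segments (the data directory path is illustrative):
$ initdb -D /usr/local/pgsql/data --wal-segsize=64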
Other, less commonly used, options are also available:
-
-d
--debug
Print debugging output from the bootstrap backend and a few other messages of lesser interest for the general public. The bootstrap backend is the program initdb
uses to create the catalog tables. This option generates a tremendous amount of extremely boring output.
-
--discard-caches
Run the bootstrap backend with the debug_discard_caches=1
option. This takes a very long time and is only of use for deep debugging.
-
-L directory
Specifies where initdb
should find its input files to initialize the database cluster. This is normally not necessary. You will be told if you need to specify their location explicitly.
-
-n
--no-clean
By default, when initdb
determines that an error prevented it from completely creating the database cluster, it removes any files it might have created before discovering that it cannot finish the job. This option inhibits tidying-up and is thus useful for debugging.
Other options:
-
-V
--version
Print the initdb version and exit.
-
-?
--help
Show help about initdb command line arguments, and exit.
3.1.3. Environment
-
PGDATA
Specifies the directory where the database cluster is to be stored; can be overridden using the -D
option.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
-
TZ
Specifies the default time zone of the created database cluster. The value should be a full time zone name.
This utility, like most other IvorySQL utilities, also uses the environment variables supported by libpq.
3.2. pg_archivecleanup
pg_archivecleanup — clean up IvorySQL WAL archive files
3.2.1. Synopsis
pg_archivecleanup
[option
…] archivelocation
oldestkeptwalfile
To configure a standby server to use pg_archivecleanup, put this into its IvorySQL.conf
configuration file:
archive_cleanup_command = 'pg_archivecleanup archivelocation %r'
where archivelocation
is the directory from which WAL segment files should be removed.
When used within archive_cleanup_command, all WAL files logically preceding the value of the %r
argument will be removed from archivelocation
. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of this parameter is appropriate if the archivelocation
is a transient staging area for this particular standby server, but not when the archivelocation
is intended as a long-term WAL archive area, or when multiple standby servers are recovering from the same archive location.
When used as a standalone program all WAL files logically preceding the oldestkeptwalfile
will be removed from archivelocation
. In this mode, if you specify a .partial
or .backup
file name, then only the file prefix will be used as the oldestkeptwalfile
. This treatment of .backup
file name allows you to remove all WAL files archived prior to a specific base backup without error. For example, the following will remove all files older than WAL file name 000000010000003700000010
:
pg_archivecleanup -d archive 000000010000003700000010.00000020.backup
pg_archivecleanup: keep WAL file "archive/000000010000003700000010" and later
pg_archivecleanup: removing file "archive/00000001000000370000000F"
pg_archivecleanup: removing file "archive/00000001000000370000000E"
pg_archivecleanup assumes that archivelocation
is a directory readable and writable by the server-owning user.
3.2.2. Options
pg_archivecleanup accepts the following command-line arguments:
-
-d
Print lots of debug logging output on stderr
.
-
-n
Print the names of the files that would have been removed on stdout
(performs a dry run).
-
-V
--version
Print the pg_archivecleanup version and exit.
-
-x
extension
Provide an extension that will be stripped from all file names before deciding if they should be deleted. This is typically useful for cleaning up archives that have been compressed during storage, and therefore have had an extension added by the compression program. For example: -x .gz
.
-
-?
--help
Show help about pg_archivecleanup command line arguments, and exit.
3.2.3. Environment
The environment variable PG_COLOR
specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
3.2.4. Examples
On Linux or Unix systems, you might use:
archive_cleanup_command = 'pg_archivecleanup -d /mnt/standby/archive %r 2>>cleanup.log'
where the archive directory is physically located on the standby server, so that the archive_command
is accessing it across NFS, but the files are local to the standby. This will:
-
produce debugging output in
cleanup.log
-
remove no-longer-needed files from the archive directory
3.3. pg_checksums
pg_checksums — enable, disable or check data checksums in an IvorySQL database cluster
3.3.2. Options
The following command-line options are available:
-
-D directory
--pgdata=directory
Specifies the directory where the database cluster is stored.
-
-c
--check
Checks checksums. This is the default mode if nothing else is specified.
-
-d
--disable
Disables checksums.
-
-e
--enable
Enables checksums.
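For example, to enable checksums on a cleanly shut down cluster (the data directory path is illustrative):
$ pg_checksums --enable -D /usr/local/pgsql/data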
-
-f filenode
--filenode=filenode
Only validate checksums in the relation with filenode filenode
.
-
-N
--no-sync
By default, pg_checksums
will wait for all files to be written safely to disk. This option causes pg_checksums
to return without waiting, which is faster, but means that a subsequent operating system crash can leave the updated data directory corrupt. Generally, this option is useful for testing but should not be used on a production installation. This option has no effect when using --check
.
-
-P
--progress
Enable progress reporting. Turning this on will deliver a progress report while checking or enabling checksums.
-
-v
--verbose
Enable verbose output. Lists all checked files.
-
-V
--version
Print the pg_checksums version and exit.
-
-?
--help
Show help about pg_checksums command line arguments, and exit.
3.3.3. Environment
-
PGDATA
Specifies the directory where the database cluster is stored; can be overridden using the -D
option.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
3.3.4. Notes
Enabling checksums in a large cluster can potentially take a long time. During this operation, the cluster or other programs that write to the data directory must not be started or else data loss may occur.
When using a replication setup with tools which perform direct copies of relation file blocks (for example pg_rewind), enabling or disabling checksums can lead to page corruptions in the shape of incorrect checksums if the operation is not done consistently across all nodes. When enabling or disabling checksums in a replication setup, it is thus recommended to stop all the clusters before switching them all consistently. Destroying all standbys, performing the operation on the primary and finally recreating the standbys from scratch is also safe.
If pg_checksums is aborted or killed while enabling or disabling checksums, the cluster’s data checksum configuration remains unchanged, and pg_checksums can be re-run to perform the same operation.
3.5. pg_ctl
pg_ctl — initialize, start, stop, or control an IvorySQL server
3.5.1. Synopsis
pg_ctl
init[db]
[-D
datadir
] [-s
] [-o
initdb-options
]
pg_ctl
start
[-D
datadir
] [-l
filename
] [-W
] [-t
seconds
] [-s
] [-o
options
] [-p
path
] [-c
]
pg_ctl
stop
[-D
datadir
] [-m
s[mart]
| f[ast]
| i[mmediate]
] [-W
] [-t
seconds
] [-s
]
pg_ctl
restart
[-D
datadir
] [-m
s[mart]
| f[ast]
| i[mmediate]
] [-W
] [-t
seconds
] [-s
] [-o
options
] [-c
]
pg_ctl
reload
[-D
datadir
] [-s
]
pg_ctl
status
[-D
datadir
]
pg_ctl
promote
[-D
datadir
] [-W
] [-t
seconds
] [-s
]
pg_ctl
logrotate
[-D
datadir
] [-s
]
pg_ctl
kill
signal_name
process_id
On Microsoft Windows, also:
pg_ctl
register
[-D
datadir
] [-N
servicename
] [-U
username
] [-P
password
] [-S
a[uto]
| d[emand]
] [-e
source
] [-W
] [-t
seconds
] [-s
] [-o
options
]
pg_ctl
unregister
[-N
servicename
]
3.5.2. Options
-
-c
--core-files
Attempt to allow server crashes to produce core files, on platforms where this is possible, by lifting any soft resource limit placed on core files. This is useful in debugging or diagnosing problems by allowing a stack trace to be obtained from a failed server process.
-
-D datadir
--pgdata=datadir
Specifies the file system location of the database configuration files. If this option is omitted, the environment variable PGDATA
is used.
-
-l filename
--log=filename
Append the server log output to filename
. If the file does not exist, it is created. The umask is set to 077, so access to the log file is disallowed to other users by default.
-
-m mode
--mode=mode
Specifies the shutdown mode. mode
can be smart
, fast
, or immediate
, or the first letter of one of these three. If this option is omitted, fast
is the default.
-
-o options
--options=options
Specifies options to be passed directly to the postgres
command. -o
can be specified multiple times, with all the given options being passed through. The options
should usually be surrounded by single or double quotes to ensure that they are passed through as a group.
-
-o initdb-options
--options=initdb-options
Specifies options to be passed directly to the initdb
command. -o
can be specified multiple times, with all the given options being passed through. The initdb-options
should usually be surrounded by single or double quotes to ensure that they are passed through as a group.
-
-p path
Specifies the location of the postgres
executable. By default the postgres
executable is taken from the same directory as pg_ctl
, or failing that, the hard-wired installation directory. It is not necessary to use this option unless you are doing something unusual and get errors that the postgres
executable was not found. In init
mode, this option analogously specifies the location of the initdb
executable.
-
-s
--silent
Print only errors, no informational messages.
-
-t seconds
--timeout=seconds
Specifies the maximum number of seconds to wait when waiting for an operation to complete (see option -w
). Defaults to the value of the PGCTLTIMEOUT
environment variable or, if not set, to 60 seconds.
-
-V
--version
Print the pg_ctl version and exit.
-
-w
--wait
Wait for the operation to complete. This is supported for the modes start
, stop
, restart
, promote
, and register
, and is the default for those modes. When waiting, pg_ctl
repeatedly checks the server's PID file, sleeping for a short amount of time between checks. Startup is considered complete when the PID file indicates that the server is ready to accept connections. Shutdown is considered complete when the server removes the PID file. pg_ctl
returns an exit code based on the success of the startup or shutdown. If the operation does not complete within the timeout (see option -t
), then pg_ctl
exits with a nonzero exit status. But note that the operation might continue in the background and eventually succeed.
-
-W
--no-wait
Do not wait for the operation to complete. This is the opposite of the option -w
. If waiting is disabled, the requested action is triggered, but there is no feedback about its success. In that case, the server log file or an external monitoring system would have to be used to check the progress and success of the operation. In prior releases of IvorySQL, this was the default except for the stop
mode.
-
-?
--help
Show help about pg_ctl command line arguments, and exit.
If an option is specified that is valid, but not relevant to the selected operating mode, pg_ctl ignores it.
3.5.3. Options for Windows
-
-e source
Name of the event source for pg_ctl to use for logging to the event log when running as a Windows service. The default is IvorySQL
. Note that this only controls messages sent from pg_ctl itself; once started, the server will use the event source specified by its event_source parameter. Should the server fail very early in startup, before that parameter has been set, it might also log using the default event source name IvorySQL
.
-
-N servicename
Name of the system service to register. This name will be used as both the service name and the display name. The default is IvorySQL
.
-
-P password
Password for the user to run the service as.
-
-S start-type
Start type of the system service. start-type
can be auto
, or demand
, or the first letter of one of these two. If this option is omitted, auto
is the default.
-
-U username
User name for the user to run the service as. For domain users, use the format DOMAIN\username
.
3.5.4. Environment
-
PGCTLTIMEOUT
Default limit on the number of seconds to wait when waiting for startup or shutdown to complete. If not set, the default is 60 seconds.
-
PGDATA
Default data directory location.
Most pg_ctl
modes require knowing the data directory location; therefore, the -D
option is required unless PGDATA
is set.
pg_ctl
, like most other IvorySQL utilities, also uses the environment variables supported by libpq
3.5.5. Files
-
postmaster.pid
pg_ctl examines this file in the data directory to determine whether the server is currently running.
-
postmaster.opts
If this file exists in the data directory, pg_ctl (in restart
mode) will pass the contents of the file as options to postgres, unless overridden by the -o
option. The contents of this file are also displayed in status
mode.
3.5.6. Examples
3.5.6.1. Starting the Server
To start the server, waiting until the server is accepting connections:
$ pg_ctl start
To start the server using port 5433, and running without fsync
, use:
$ pg_ctl -o "-F -p 5433" start
3.5.6.2. Stopping the Server
To stop the server, use:
$ pg_ctl stop
The -m
option allows control over how the server shuts down:
$ pg_ctl stop -m smart
3.5.6.3. Restarting the Server
Restarting the server is almost equivalent to stopping the server and starting it again, except that by default, pg_ctl
saves and reuses the command line options that were passed to the previously-running instance. To restart the server using the same options as before, use:
$ pg_ctl restart
But if -o
is specified, that replaces any previous options. To restart using port 5433, disabling fsync
upon restart:
$ pg_ctl -o "-F -p 5433" restart
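3.5.6.4. Promoting a Standby
The promote mode shown in the synopsis makes a standby server exit recovery. A hypothetical invocation (the data directory path is illustrative):
$ pg_ctl promote -D /usr/local/pgsql/data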
3.6. pg_resetwal
pg_resetwal — reset the write-ahead log and other control information of an IvorySQL database cluster
3.6.2. Options
-
-f
--force
Force pg_resetwal
to proceed even if it cannot determine valid data for pg_control
, as explained above.
-
-n
--dry-run
The -n
/ --dry-run
option instructs pg_resetwal
to print the values reconstructed from pg_control
and values about to be changed, and then exit without modifying anything. This is mainly a debugging tool, but can be useful as a sanity check before allowing pg_resetwal
to proceed for real.
-
-V
--version
Display version information, then exit.
-
-?
--help
Show help, then exit.
The following options are only needed when pg_resetwal
is unable to determine appropriate values by reading pg_control
. Safe values can be determined as described below. For values that take numeric arguments, hexadecimal values can be specified by using the prefix 0x
.
-
-c xid,xid
--commit-timestamp-ids=xid,xid
Manually set the oldest and newest transaction IDs for which the commit time can be retrieved. A safe value for the oldest transaction ID for which the commit time can be retrieved (first part) can be determined by looking for the numerically smallest file name in the directory pg_commit_ts
under the data directory. Conversely, a safe value for the newest transaction ID for which the commit time can be retrieved (second part) can be determined by looking for the numerically greatest file name in the same directory. The file names are in hexadecimal.
-
-e xid_epoch
--epoch=xid_epoch
Manually set the next transaction ID's epoch. The transaction ID epoch is not actually stored anywhere in the database except in the field that is set by pg_resetwal
, so any value will work so far as the database itself is concerned. You might need to adjust this value to ensure that replication systems such as Slony-I and Skytools work correctly — if so, an appropriate value should be obtainable from the state of the downstream replicated database.
-
-l walfile
--next-wal-file=walfile
Manually set the WAL starting location by specifying the name of the next WAL segment file.The name of next WAL segment file should be larger than any WAL segment file name currently existing in the directory pg_wal
under the data directory. These names are also in hexadecimal and have three parts. The first part is the “timeline ID” and should usually be kept the same. For example, if 00000001000000320000004A
is the largest entry in pg_wal
, use -l 00000001000000320000004B
or higher. Note that when using nondefault WAL segment sizes, the numbers in the WAL file names are different from the LSNs that are reported by system functions and system views. This option takes a WAL file name, not an LSN. Note: pg_resetwal itself looks at the files in pg_wal
and chooses a default -l
setting beyond the last existing file name. Therefore, manual adjustment of -l
should only be needed if you are aware of WAL segment files that are not currently present in pg_wal
, such as entries in an offline archive; or if the contents of pg_wal
have been lost entirely.
-
-m mxid,mxid
--multixact-ids=mxid,mxid
Manually set the next and oldest multitransaction ID. A safe value for the next multitransaction ID (first part) can be determined by looking for the numerically largest file name in the directory pg_multixact/offsets
under the data directory, adding one, and then multiplying by 65536 (0x10000). Conversely, a safe value for the oldest multitransaction ID (second part of -m
) can be determined by looking for the numerically smallest file name in the same directory and multiplying by 65536. The file names are in hexadecimal, so the easiest way to do this is to specify the option value in hexadecimal and append four zeroes.
-
-o oid
--next-oid=oid
Manually set the next OID. There is no comparably easy way to determine a next OID that's beyond the largest one in the database, but fortunately it is not critical to get the next-OID setting right.
-
-O mxoff
--multixact-offset=mxoff
Manually set the next multitransaction offset. A safe value can be determined by looking for the numerically largest file name in the directory pg_multixact/members
under the data directory, adding one, and then multiplying by 52352 (0xCC80). The file names are in hexadecimal. There is no simple recipe such as the ones for other options of appending zeroes.
-
--wal-segsize=wal_segment_size
Set the new WAL segment size, in megabytes. The value must be set to a power of 2 between 1 and 1024 (megabytes). See the same option of initdb for more information. Note: While pg_resetwal
will set the WAL starting address beyond the latest existing WAL segment file, some segment size changes can cause previous WAL file names to be reused. It is recommended to use -l
together with this option to manually set the WAL starting address if WAL file name overlap will cause problems with your archiving strategy.
-
-u xid
--oldest-transaction-id=xid
Manually set the oldest unfrozen transaction ID. A safe value can be determined by looking for the numerically smallest file name in the directory pg_xact
under the data directory and then multiplying by 1048576 (0x100000). Note that the file names are in hexadecimal. It is usually easiest to specify the option value in hexadecimal too. For example, if 0007
is the smallest entry in pg_xact
, -u 0x700000
will work (five trailing zeroes provide the proper multiplier).
-
-x xid
--next-transaction-id=xid
Manually set the next transaction ID. A safe value can be determined by looking for the numerically largest file name in the directory pg_xact
under the data directory, adding one, and then multiplying by 1048576 (0x100000). Note that the file names are in hexadecimal. It is usually easiest to specify the option value in hexadecimal too. For example, if 0011
is the largest entry in pg_xact
, -x 0x1200000
will work (five trailing zeroes provide the proper multiplier).
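Continuing that example, a complete hypothetical invocation would be (the data directory path is illustrative):
$ pg_resetwal -x 0x1200000 -D /usr/local/pgsql/data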
3.6.3. Environment
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
3.6.4. Notes
This command must not be used when the server is running. pg_resetwal
will refuse to start up if it finds a server lock file in the data directory. If the server crashed then a lock file might have been left behind; in that case you can remove the lock file to allow pg_resetwal
to run. But before you do so, make doubly certain that there is no server process still alive.
pg_resetwal
works only with servers of the same major version.
3.7. pg_rewind
pg_rewind — synchronize an IvorySQL data directory with another data directory that was forked from it
3.7.1. Synopsis
pg_rewind
[option
…] { -D
| --target-pgdata
} directory
{ --source-pgdata=directory | --source-server=connstr }
3.7.2. Warning
If pg_rewind fails while processing, then the data folder of the target is likely not in a state that can be recovered. In such a case, taking a new fresh backup is recommended.
As pg_rewind copies configuration files entirely from the source, it may be required to correct the configuration used for recovery before restarting the target server, especially if the target is reintroduced as a standby of the source. If you restart the server after the rewind operation has finished but without configuring recovery, the target may again diverge from the primary.
pg_rewind will fail immediately if it finds files it cannot write directly to. This can happen for example when the source and the target server use the same file mapping for read-only SSL keys and certificates. If such files are present on the target server it is recommended to remove them before running pg_rewind. After doing the rewind, some of those files may have been copied from the source, in which case it may be necessary to remove the data copied and restore back the set of links used before the rewind.
3.7.3. Options
pg_rewind accepts the following command-line arguments:
-
-D directory
--target-pgdata=directory
This option specifies the target data directory that is synchronized with the source. The target server must be shut down cleanly before running pg_rewind.
-
--source-pgdata=directory
Specifies the file system path to the data directory of the source server to synchronize the target with. This option requires the source server to be cleanly shut down.
-
--source-server=connstr
Specifies a libpq connection string to connect to the source IvorySQL server to synchronize the target with. The connection must be a normal (non-replication) connection with a role having sufficient permissions to execute the functions used by pg_rewind on the source server (see Notes section for details) or a superuser role. This option requires the source server to be running and accepting connections.
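For example, a hypothetical rewind of a former primary from a running source server (the path and connection parameters are illustrative; rewind_user is the role created in the Notes section below):
$ pg_rewind --target-pgdata=/var/lib/ivorysql/data --source-server='host=primary.example.com user=rewind_user dbname=postgres' --progress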
-
-R
--write-recovery-conf
Create standby.signal
and append connection settings to IvorySQL.auto.conf
in the output directory. --source-server
is mandatory with this option.
-
-n
--dry-run
Do everything except actually modifying the target directory.
-
-N
--no-sync
By default, pg_rewind
will wait for all files to be written safely to disk. This option causes pg_rewind
to return without waiting, which is faster, but means that a subsequent operating system crash can leave the data directory corrupt. Generally, this option is useful for testing but should not be used on a production installation.
-
-P
--progress
Enables progress reporting. Turning this on will deliver an approximate progress report while copying data from the source cluster.
-
-c
--restore-target-wal
Use restore_command
defined in the target cluster configuration to retrieve WAL files from the WAL archive if these files are no longer available in the pg_wal
directory.
-
--config-file=filename
Use the specified main server configuration file for the target cluster. This affects pg_rewind when it internally uses the postgres command for the rewind operation on this cluster (when retrieving restore_command
with the option -c/--restore-target-wal
and when forcing a completion of crash recovery).
-
--debug
Print verbose debugging output that is mostly useful for developers debugging pg_rewind.
-
--no-ensure-shutdown
pg_rewind requires that the target server is cleanly shut down before rewinding. By default, if the target server is not shut down cleanly, pg_rewind starts the target server in single-user mode to complete crash recovery first, and stops it. By passing this option, pg_rewind skips this and errors out immediately if the server is not cleanly shut down. Users are expected to handle the situation themselves in that case.
-
-V
--version
Display version information, then exit.
-
-?
--help
Show help, then exit.
3.7.4. Environment
When the --source-server
option is used, pg_rewind also uses the environment variables supported by libpq.
The environment variable PG_COLOR
specifies whether to use color in diagnostic messages. Possible values are always
, auto
and never
.
3.7.5. Notes
When executing pg_rewind using an online cluster as source, a role having sufficient permissions to execute the functions used by pg_rewind on the source cluster can be used instead of a superuser. Here is how to create such a role, named rewind_user
here:
CREATE USER rewind_user LOGIN;
GRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean) TO rewind_user;
GRANT EXECUTE ON function pg_catalog.pg_stat_file(text, boolean) TO rewind_user;
GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO rewind_user;
GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint, bigint, boolean) TO rewind_user;
When executing pg_rewind using an online cluster as source which has been recently promoted, it is necessary to execute a CHECKPOINT
after promotion such that its control file reflects up-to-date timeline information, which is used by pg_rewind to check if the target cluster can be rewound using the designated source cluster.
3.7.6. How It Works
The basic idea is to copy all file system-level changes from the source cluster to the target cluster:
1.Scan the WAL log of the target cluster, starting from the last checkpoint before the point where the source cluster’s timeline history forked off from the target cluster. For each WAL record, record each data block that was touched. This yields a list of all the data blocks that were changed in the target cluster, after the source cluster forked off. If some of the WAL files are no longer available, try re-running pg_rewind with the -c
option to search for the missing files in the WAL archive.
2.Copy all those changed blocks from the source cluster to the target cluster, either using direct file system access (--source-pgdata
) or SQL (--source-server
). Relation files are now in a state equivalent to the moment of the last completed checkpoint prior to the point at which the WAL timelines of the source and target diverged plus the current state on the source of any blocks changed on the target after that divergence.
3.Copy all other files, including new relation files, WAL segments, pg_xact
, and configuration files from the source cluster to the target cluster. Similarly to base backups, the contents of the directories pg_dynshmem/
, pg_notify/
, pg_replslot/
, pg_serial/
, pg_snapshots/
, pg_stat_tmp/
, and pg_subtrans/
are omitted from the data copied from the source cluster. The files backup_label
, tablespace_map
, pg_internal.init
, postmaster.opts
, and postmaster.pid
, as well as any file or directory beginning with pgsql_tmp
, are omitted.
4.Create a backup_label
file to begin WAL replay at the checkpoint created at failover and configure the pg_control
file with a minimum consistency LSN defined as the result of pg_current_wal_insert_lsn()
when rewinding from a live source or the last checkpoint LSN when rewinding from a stopped source.
5.When starting the target, IvorySQL replays all the required WAL, resulting in a data directory in a consistent state.
3.8. pg_test_fsync
pg_test_fsync — determine fastest wal_sync_method
for IvorySQL
3.8.2. Options
pg_test_fsync accepts the following command-line options:
-
-f filename
--filename=filename
Specifies the file name to write test data in. This file should be in the same file system that the pg_wal
directory is or will be placed in. (pg_wal
contains the WAL files.) The default is pg_test_fsync.out
in the current directory.
-
-s secs-per-test
--secs-per-test=secs-per-test
Specifies the number of seconds for each test. The more time per test, the greater the test’s accuracy, but the longer it takes to run. The default is 5 seconds, which allows the program to complete in under 2 minutes.
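For example, a hypothetical run that tests each method for 10 seconds against a file on the intended WAL disk (the path is illustrative):
$ pg_test_fsync -s 10 -f /mnt/wal/pg_test_fsync.out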
-
-V
--version
Print the pg_test_fsync version and exit.
-
-?
--help
Show help about pg_test_fsync command line arguments, and exit.
3.9. pg_test_timing
pg_test_timing — measure timing overhead
3.9.2. Options
pg_test_timing accepts the following command-line options:
-
-d duration
--duration=duration
Specifies the test duration, in seconds. Longer durations give slightly better accuracy, and are more likely to discover problems with the system clock moving backwards. The default test duration is 3 seconds.
-
-V
--version
Print the pg_test_timing version and exit.
-
-?
--help
Show help about pg_test_timing command line arguments, and exit.
3.9.3. Usage
3.9.3.1. Interpreting Results
Good results will show most (>90%) individual timing calls take less than one microsecond. Average per loop overhead will be even lower, below 100 nanoseconds. This example from an Intel i7-860 system using a TSC clock source shows excellent performance:
Testing timing overhead for 3 seconds.
Per loop time including overhead: 35.96 ns
Histogram of timing durations:
< us % of total count
1 96.40465 80435604
2 3.59518 2999652
4 0.00015 126
8 0.00002 13
16 0.00000 2
Note that different units are used for the per loop time than the histogram. The loop can have resolution within a few nanoseconds (ns), while the individual timing calls can only resolve down to one microsecond (us).
3.9.3.2. Measuring Executor Timing Overhead
When the query executor is running a statement using EXPLAIN ANALYZE
, individual operations are timed as well as showing a summary. The overhead of your system can be checked by counting rows with the psql program:
CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
\timing
SELECT COUNT(*) FROM t;
EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
The i7-860 system measured runs the count query in 9.8 ms while the EXPLAIN ANALYZE
version takes 16.6 ms, each processing just over 100,000 rows. That 6.8 ms difference means the timing overhead per row is 68 ns, about twice what pg_test_timing estimated it would be. Even that relatively small amount of overhead is making the fully timed count statement take almost 70% longer. On more substantial queries, the timing overhead would be less problematic.
3.9.3.3. Changing Time Sources
On some newer Linux systems, it’s possible to change the clock source used to collect timing data at any time. A second example shows the slowdown possible from switching to the slower acpi_pm time source, on the same system used for the fast results above:
# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
tsc hpet acpi_pm
# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
# pg_test_timing
Per loop time including overhead: 722.92 ns
Histogram of timing durations:
< us % of total count
1 27.84870 1155682
2 72.05956 2990371
4 0.07810 3241
8 0.01357 563
16 0.00007 3
In this configuration, the sample EXPLAIN ANALYZE
above takes 115.9 ms. That's 1061 ns of timing overhead, again a small multiple of what's measured directly by this utility. That much timing overhead means the actual query itself is only taking a tiny fraction of the accounted-for time; most of it is being consumed in overhead instead. In this configuration, any EXPLAIN ANALYZE
totals involving many timed operations would be inflated significantly by timing overhead.
FreeBSD also allows changing the time source on the fly, and it logs information about the timer selected during boot:
# dmesg | grep "Timecounter"
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounters tick every 10.000 msec
Timecounter "TSC" frequency 2531787134 Hz quality 800
# sysctl kern.timecounter.hardware=TSC
kern.timecounter.hardware: ACPI-fast -> TSC
Other systems may only allow setting the time source on boot. On older Linux systems the "clock" kernel setting is the only way to make this sort of change. And even on some more recent ones, the only option you’ll see for a clock source is "jiffies". Jiffies are the older Linux software clock implementation, which can have good resolution when it’s backed by fast enough timing hardware, as in this example:
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
jiffies
$ dmesg | grep time.c
time.c: Using 3.579545 MHz WALL PM GTOD PIT/TSC timer.
time.c: Detected 2400.153 MHz processor.
$ pg_test_timing
Testing timing overhead for 3 seconds.
Per timing duration including loop overhead: 97.75 ns
Histogram of timing durations:
< us % of total count
1 90.23734 27694571
2 9.75277 2993204
4 0.00981 3010
8 0.00007 22
16 0.00000 1
32 0.00000 1
3.9.3.4. Clock Hardware and Timing Accuracy
Collecting accurate timing information is normally done on computers using hardware clocks with various levels of accuracy. With some hardware the operating systems can pass the system clock time almost directly to programs. A system clock can also be derived from a chip that simply provides timing interrupts, periodic ticks at some known time interval. In either case, operating system kernels provide a clock source that hides these details. But the accuracy of that clock source and how quickly it can return results varies based on the underlying hardware.
Inaccurate time keeping can result in system instability. Test any change to the clock source very carefully. Operating system defaults are sometimes made to favor reliability over best accuracy. And if you are using a virtual machine, look into the recommended time sources compatible with it. Virtual hardware faces additional difficulties when emulating timers, and there are often per operating system settings suggested by vendors.
The Time Stamp Counter (TSC) clock source is the most accurate one available on current generation CPUs. It’s the preferred way to track the system time when it’s supported by the operating system and the TSC clock is reliable. There are several ways that TSC can fail to provide an accurate timing source, making it unreliable. Older systems can have a TSC clock that varies based on the CPU temperature, making it unusable for timing. Trying to use TSC on some older multicore CPUs can give a reported time that’s inconsistent among multiple cores. This can result in the time going backwards, a problem this program checks for. And even the newest systems can fail to provide accurate TSC timing with very aggressive power saving configurations.
Newer operating systems may check for the known TSC problems and switch to a slower, more stable clock source when they are seen. If your system supports TSC time but doesn’t default to that, it may be disabled for a good reason. And some operating systems may not detect all the possible problems correctly, or will allow using TSC even in situations where it’s known to be inaccurate.
The High Precision Event Timer (HPET) is the preferred timer on systems where it’s available and TSC is not accurate. The timer chip itself is programmable to allow up to 100 nanosecond resolution, but you may not see that much accuracy in your system clock.
Advanced Configuration and Power Interface (ACPI) provides a Power Management (PM) Timer, which Linux refers to as the acpi_pm. The clock derived from acpi_pm will at best provide 300 nanosecond resolution.
Timers used on older PC hardware include the 8254 Programmable Interval Timer (PIT), the real-time clock (RTC), the Advanced Programmable Interrupt Controller (APIC) timer, and the Cyclone timer. These timers aim for millisecond resolution.
3.10. pg_upgrade
pg_upgrade — upgrade an IvorySQL server instance
3.10.2. Options
pg_upgrade accepts the following command-line arguments:
-
-b bindir
--old-bindir=bindir
the old IvorySQL executable directory; environment variable PGBINOLD
-
-B bindir
--new-bindir=bindir
the new IvorySQL executable directory; default is the directory where pg_upgrade resides; environment variable PGBINNEW
-
-c
--check
check clusters only, don’t change any data
-
-d configdir
--old-datadir=configdir
the old database cluster configuration directory; environment variable PGDATAOLD
-
-D configdir
--new-datadir=configdir
the new database cluster configuration directory; environment variable PGDATANEW
-
-j njobs
--jobs=njobs
number of simultaneous processes or threads to use
-
-k
--link
use hard links instead of copying files to the new cluster
-
-N
--no-sync
By default, pg_upgrade will wait for all files of the upgraded cluster to be written safely to disk. This option causes pg_upgrade to return without waiting, which is faster, but means that a subsequent operating system crash can leave the data directory corrupt. Generally, this option is useful for testing but should not be used on a production installation.
-
-o options
--old-options options
options to be passed directly to the old postgres command; multiple option invocations are appended
-
-O options
--new-options options
options to be passed directly to the new postgres command; multiple option invocations are appended
-
-p port
--old-port=port
the old cluster port number; environment variable PGPORTOLD
-
-P port
--new-port=port
the new cluster port number; environment variable PGPORTNEW
-
-r
--retain
retain SQL and log files even after successful completion
-
-s dir
--socketdir=dir
directory to use for postmaster sockets during upgrade; default is current working directory; environment variable PGSOCKETDIR
-
-U username
--username=username
cluster’s install user name; environment variable PGUSER
-
-v
--verbose
enable verbose internal logging
-
-V
--version
display version information, then exit
-
--clone
Use efficient file cloning (also known as “reflinks” on some systems) instead of copying files to the new cluster. This can result in near-instantaneous copying of the data files, giving the speed advantages of -k/--link while leaving the old cluster untouched.
File cloning is only supported on some operating systems and file systems. If it is selected but not supported, the pg_upgrade run will error. At present, it is supported on Linux (kernel 4.5 or later) with Btrfs and XFS (on file systems created with reflink support), and on macOS with APFS.
-
-?
--help
show help, then exit
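As an illustration, a cautious upgrade might first run with --check and then repeat the same invocation without it; the bin and data directory paths below are hypothetical placeholders:
$ pg_upgrade -b /usr/local/ivorysql-old/bin -B /usr/local/ivorysql-new/bin \
    -d /usr/local/ivorysql-old/data -D /usr/local/ivorysql-new/data --check
$ pg_upgrade -b /usr/local/ivorysql-old/bin -B /usr/local/ivorysql-new/bin \
    -d /usr/local/ivorysql-old/data -D /usr/local/ivorysql-new/data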
3.10.3. Usage
These are the steps to perform an upgrade with pg_upgrade:
1. Optionally move the old cluster
2. For source installs, build the new version
3. Install the new IvorySQL binaries
4. Initialize the new IvorySQL cluster
5. Install extension shared object files
6. Copy custom full-text search files
7. Adjust authentication
8. Stop both servers
9. Prepare for standby server upgrades
10. Run pg_upgrade
11. Upgrade streaming replication and log-shipping standby servers
12. Restore pg_hba.conf
13. Start the new server
14. Post-upgrade processing
15. Statistics
Because optimizer statistics are not transferred by pg_upgrade, you will be instructed to run a command to regenerate that information at the end of the upgrade. You might need to set connection parameters to match your new cluster.
16. Delete old cluster
Once you are satisfied with the upgrade, you can delete the old cluster’s data directories by running the script mentioned when pg_upgrade completes. (Automatic deletion is not possible if you have user-defined tablespaces inside the old data directory.) You can also delete the old installation directories (e.g., bin, share).
17. Reverting to old cluster
If, after running pg_upgrade, you wish to revert to the old cluster, there are several options:
-
If the --check option was used, the old cluster was unmodified; it can be restarted.
-
If the --link option was not used, the old cluster was unmodified; it can be restarted.
-
If the --link option was used, the data files might be shared between the old and new cluster:
  -
  If pg_upgrade aborted before linking started, the old cluster was unmodified; it can be restarted.
  -
  If you did not start the new cluster, the old cluster was unmodified except that, when linking started, a .old suffix was appended to $PGDATA/global/pg_control. To reuse the old cluster, remove the .old suffix from $PGDATA/global/pg_control; you can then restart the old cluster (see the sketch after this list).
  -
  If you did start the new cluster, it has written to shared files and it is unsafe to use the old cluster. The old cluster will need to be restored from backup in this case.
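A sketch of the pg_control rename described above, assuming $PGDATA points at the old data directory and that linking had started but the new cluster was never started:
$ mv $PGDATA/global/pg_control.old $PGDATA/global/pg_control
$ pg_ctl start -D $PGDATA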
3.10.4. Notes
pg_upgrade creates various working files, such as schema dumps, stored within pg_upgrade_output.d in the directory of the new cluster. Each run creates a new subdirectory named with a timestamp formatted as per ISO 8601 (%Y%m%dT%H%M%S), where all its generated files are stored. pg_upgrade_output.d and its contained files will be removed automatically if pg_upgrade completes successfully; but in the event of trouble, the files there may provide useful debugging information.
pg_upgrade launches short-lived postmasters in the old and new data directories. Temporary Unix socket files for communication with these postmasters are, by default, made in the current working directory. In some situations the path name for the current directory might be too long to be a valid socket name. In that case you can use the -s option to put the socket files in some directory with a shorter path name. For security, be sure that the directory is not readable or writable by any other users. (This is not supported on Windows.)
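For example, to keep the socket path short and private, one might run something along these lines; the directory name is a hypothetical placeholder:
$ mkdir -m 0700 /tmp/pgupg
$ pg_upgrade -s /tmp/pgupg ...other options...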
All failure, rebuild, and reindex cases will be reported by pg_upgrade if they affect your installation; post-upgrade scripts to rebuild tables and indexes will be generated automatically. If you are trying to automate the upgrade of many clusters, you should find that clusters with identical database schemas require the same post-upgrade steps for all cluster upgrades; this is because the post-upgrade steps are based on the database schemas, and not user data.
For deployment testing, create a schema-only copy of the old cluster, insert dummy data, and upgrade that.
pg_upgrade does not support upgrading of databases containing table columns using these reg* OID-referencing system data types:
regcollation
regconfig
regdictionary
regnamespace
regoper
regoperator
regproc
regprocedure
(regclass, regrole, and regtype can be upgraded.)
If you want to use link mode and you do not want your old cluster to be modified when the new cluster is started, consider using the clone mode. If that is not available, make a copy of the old cluster and upgrade that in link mode. To make a valid copy of the old cluster, use rsync to create a dirty copy of the old cluster while the server is running, then shut down the old server and run rsync --checksum again to update the copy with any changes to make it consistent. (--checksum is necessary because rsync only has file modification-time granularity of one second.) If your file system supports file system snapshots or copy-on-write file copies, you can use that to make a backup of the old cluster and tablespaces, though the snapshot and copies must be created simultaneously or while the database server is down.
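A sketch of that two-pass copy, with hypothetical paths standing in for the real old-cluster and copy directories:
$ rsync --archive /usr/local/pgsql/old/ /usr/local/pgsql/old_copy/
$ pg_ctl stop -D /usr/local/pgsql/old
$ rsync --archive --checksum /usr/local/pgsql/old/ /usr/local/pgsql/old_copy/
The first pass runs while the server is up and is therefore inconsistent; the second pass, after shutdown, reconciles the copy.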
3.11. pg_waldump
pg_waldump — display a human-readable rendering of the write-ahead log of an IvorySQL database cluster
3.11.2. Options
The following command-line options control the location and format of the output:
-
startseg
Start reading at the specified log segment file. This implicitly determines the path in which files will be searched for, and the timeline to use.
-
endseg
Stop after reading the specified log segment file.
-
-b
--bkp-details
Output detailed information about backup blocks.
-
-B block
--block=block
Only display records that modify the given block. The relation must also be provided with --relation or -R.
-
-e end
--end=end
Stop reading at the specified WAL location, instead of reading to the end of the log stream.
-
-f
--follow
After reaching the end of valid WAL, keep polling once per second for new WAL to appear.
-
-F fork
--fork=fork
If provided, only display records that modify blocks in the given fork. The valid values are main for the main fork, fsm for the free space map, vm for the visibility map, and init for the init fork.
-
-n limit
--limit=limit
Display the specified number of records, then stop.
-
-p path
--path=path
Specifies a directory to search for log segment files or a directory with a pg_wal subdirectory that contains such files. The default is to search in the current directory, the pg_wal subdirectory of the current directory, and the pg_wal subdirectory of PGDATA.
-
-q
--quiet
Do not print any output, except for errors. This option can be useful when you want to know whether a range of WAL records can be successfully parsed but don’t care about the record contents.
-
-r rmgr
--rmgr=rmgr
Only display records generated by the specified resource manager. You can specify the option multiple times to select multiple resource managers. If list is passed as name, print a list of valid resource manager names, and exit.
Extensions may define custom resource managers, but pg_waldump does not load the extension module and therefore does not recognize custom resource managers by name. Instead, a custom resource manager can be specified as custom???, where “???” is the three-digit resource manager ID. Names of this form will always be considered valid.
-
-R tblspc / db / rel
--relation=tblspc / db / rel
Only display records that modify blocks in the given relation. The relation is specified with tablespace OID, database OID, and relfilenode separated by slashes, for example 1234/12345/12345. This is the same format used for relations in the program’s output.
-
-s start
--start=start
WAL location at which to start reading. The default is to start reading the first valid log record found in the earliest file found.
-
-t timeline
--timeline=timeline
Timeline from which to read log records. The default is to use the value in startseg, if that is specified; otherwise, the default is 1.
-
-V
--version
Print the pg_waldump version and exit.
-
-w
--fullpage
Only display records that include full page images.
-
-x xid
--xid=xid
Only display records marked with the given transaction ID.
-
-z
--stats[=record]
Display summary statistics (number and size of records and full-page images) instead of individual records. Optionally generate statistics per-record instead of per-rmgr.
If pg_waldump is terminated by signal SIGINT (Control+C), the summary of the statistics computed up to the termination point is displayed. This operation is not supported on Windows.
-
-?
--help
Show help about pg_waldump command line arguments, and exit.
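For example, to summarize only heap-related records in a single segment file (the segment name below is a hypothetical placeholder):
$ pg_waldump --rmgr=Heap --stats=record 000000010000000000000001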
3.11.3. Environment
-
PGDATA
Data directory; see also the -p option.
-
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible values are always, auto and never.
3.11.4. Notes
Can give wrong results when the server is running.
Only the specified timeline is displayed (or the default, if none is specified). Records in other timelines are ignored.
pg_waldump cannot read WAL files with suffix .partial. If those files need to be read, the .partial suffix needs to be removed from the file name.
3.12. postgres
postgres — IvorySQL database server
3.12.2. Options
postgres accepts the following command-line arguments. You can save typing most of these options by setting up a configuration file. Some (safe) options can also be set from the connecting client in an application-dependent way to apply only for that session. For example, if the environment variable PGOPTIONS is set, then libpq-based clients will pass that string to the server, which will interpret it as postgres command-line options.
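A brief sketch of that mechanism with a libpq-based client such as psql (geqo is used here only as an example of a settable parameter):
$ PGOPTIONS='-c geqo=off' psql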
3.12.3. General Purpose
-
-B nbuffers
Sets the number of shared buffers for use by the server processes. The default value of this parameter is chosen automatically by initdb. Specifying this option is equivalent to setting the shared_buffers configuration parameter.
-
-c name=value
Sets a named run-time parameter. Most of the other command line options are in fact short forms of such a parameter assignment. -c can appear multiple times to set multiple parameters.
-
-C name
Prints the value of the named run-time parameter, and exits. (See the -c option above for details.) This returns values from IvorySQL.conf, modified by any parameters supplied in this invocation. It does not reflect parameters supplied when the cluster was started.
This can be used on a running server for most parameters. However, the server must be shut down for some runtime-computed parameters (e.g., shared_memory_size, shared_memory_size_in_huge_pages, and wal_segment_size).
This option is meant for other programs that interact with a server instance, such as pg_ctl, to query configuration parameter values. User-facing applications should instead use SHOW or the pg_settings view. A usage sketch appears after this list.
-
-d debug-level
Sets the debug level. The higher this value is set, the more debugging output is written to the server log. Values are from 1 to 5. It is also possible to pass -d 0 for a specific session, which will prevent the server log level of the parent postgres process from being propagated to this session.
-
-D datadir
Specifies the file system location of the database configuration files.
-
-e
Sets the default date style to “European”, that is DMY ordering of input date fields. This also causes the day to be printed before the month in certain date output formats.
-
-F
Disables fsync calls for improved performance, at the risk of data corruption in the event of a system crash. Specifying this option is equivalent to disabling the fsync configuration parameter. Read the detailed documentation before using this!
-
-h hostname
Specifies the IP host name or address on which postgres is to listen for TCP/IP connections from client applications. The value can also be a comma-separated list of addresses, or * to specify listening on all available interfaces. An empty value specifies not listening on any IP addresses, in which case only Unix-domain sockets can be used to connect to the server. Defaults to listening only on localhost. Specifying this option is equivalent to setting the listen_addresses configuration parameter.
-
-i
Allows remote clients to connect via TCP/IP (Internet domain) connections. Without this option, only local connections are accepted. This option is equivalent to setting listen_addresses to * in IvorySQL.conf or via -h.
This option is deprecated since it does not allow access to the full functionality of listen_addresses. It’s usually better to set listen_addresses directly.
-
-k directory
Specifies the directory of the Unix-domain socket on which postgres is to listen for connections from client applications. The value can also be a comma-separated list of directories. An empty value specifies not listening on any Unix-domain sockets, in which case only TCP/IP sockets can be used to connect to the server. The default value is normally /tmp, but that can be changed at build time. Specifying this option is equivalent to setting the unix_socket_directories configuration parameter.
-
-l
Enables secure connections using SSL. IvorySQL must have been compiled with support for SSL for this option to be available.
-
-N max-connections
Sets the maximum number of client connections that this server will accept. The default value of this parameter is chosen automatically by initdb. Specifying this option is equivalent to setting the max_connections configuration parameter.
-
-p port
Specifies the TCP/IP port or local Unix domain socket file extension on which postgres is to listen for connections from client applications. Defaults to the value of the PGPORT environment variable, or if PGPORT is not set, then defaults to the value established during compilation (normally 5432). If you specify a port other than the default port, then all client applications must specify the same port using either command-line options or PGPORT.
-
-s
Print time information and other statistics at the end of each command. This is useful for benchmarking or for use in tuning the number of buffers.
-
-S work-mem
Specifies the base amount of memory to be used by sorts and hash tables before resorting to temporary disk files.
-
-V
--version
Print the postgres version and exit.
-
--name=value
Sets a named run-time parameter; a shorter form of -c.
-
--describe-config
This option dumps out the server’s internal configuration variables, descriptions, and defaults in tab-delimited COPY format. It is designed primarily for use by administration tools.
-
-?
--help
Show help about postgres command line arguments, and exit.
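The sketch referenced in the -C entry above, querying a runtime-computed parameter from a stopped cluster; the data directory path is a hypothetical placeholder:
$ postgres -C shared_memory_size_in_huge_pages -D /usr/local/pgsql/data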
3.12.4. Semi-Internal Options
The options described here are used mainly for debugging purposes, and in some cases to assist with recovery of severely damaged databases. There should be no reason to use them in a production database setup. They are listed here only for use by IvorySQL system developers. Furthermore, these options might change or be removed in a future release without notice.
-
-f { s | i | o | b | t | n | m | h }
Forbids the use of particular scan and join methods: s and i disable sequential and index scans respectively, o, b and t disable index-only scans, bitmap index scans, and TID scans respectively, while n, m, and h disable nested-loop, merge and hash joins respectively.
Neither sequential scans nor nested-loop joins can be disabled completely; the -fs and -fn options simply discourage the optimizer from using those plan types if it has any other alternative.
-
-n
This option is for debugging problems that cause a server process to die abnormally. The ordinary strategy in this situation is to notify all other server processes that they must terminate and then reinitialize the shared memory and semaphores. This is because an errant server process could have corrupted some shared state before terminating. This option specifies that postgres will not reinitialize shared data structures. A knowledgeable system programmer can then use a debugger to examine shared memory and semaphore state.
-
-O
Allows the structure of system tables to be modified. This is used by initdb.
-
-P
Ignore system indexes when reading system tables, but still update the indexes when modifying the tables. This is useful when recovering from damaged system indexes.
-
-t pa[rser] | pl[anner] | e[xecutor]
Print timing statistics for each query relating to each of the major system modules. This option cannot be used together with the -s option.
-
-T
This option is for debugging problems that cause a server process to die abnormally. The ordinary strategy in this situation is to notify all other server processes that they must terminate and then reinitialize the shared memory and semaphores. This is because an errant server process could have corrupted some shared state before terminating. This option specifies that postgres will stop all other server processes by sending the signal SIGSTOP, but will not cause them to terminate. This permits system programmers to collect core dumps from all server processes by hand.
-
-v protocol
Specifies the version number of the frontend/backend protocol to be used for a particular session. This option is for internal use only.
-
-W seconds
A delay of this many seconds occurs when a new server process is started, after it conducts the authentication procedure. This is intended to give an opportunity to attach to the server process with a debugger.
3.12.5. Options for Single-User Mode
The following options only apply to the single-user mode (see Single-User Mode below).
-
--single
Selects the single-user mode. This must be the first argument on the command line.
-
database
Specifies the name of the database to be accessed. This must be the last argument on the command line. If it is omitted it defaults to the user name.
-
-E
Echo all commands to standard output before executing them.
-
-j
Use semicolon followed by two newlines, rather than just newline, as the command entry terminator.
-
-r filename
Send all server log output to filename. This option is only honored when supplied as a command-line option.
3.12.6. Environment
-
PGCLIENTENCODING
Default character encoding used by clients. (The clients can override this individually.) This value can also be set in the configuration file.
-
PGDATA
Default data directory location
-
PGDATESTYLE
Default value of the DateStyle run-time parameter. (The use of this environment variable is deprecated.)
-
PGPORT
Default port number (preferably set in the configuration file)
3.12.7. Diagnostics
A failure message mentioning semget or shmget probably indicates you need to configure your kernel to provide adequate shared memory and semaphores. You might be able to postpone reconfiguring your kernel by decreasing shared_buffers to reduce the shared memory consumption of IvorySQL, and/or by reducing max_connections to reduce the semaphore consumption.
A failure message suggesting that another server is already running should be checked carefully, for example by using the command
$ ps ax | grep postgres
or
$ ps -ef | grep postgres
depending on your system. If you are certain that no conflicting server is running, you can remove the lock file mentioned in the message and try again.
A failure message indicating inability to bind to a port might indicate that the port is already in use by some non-IvorySQL process. You might also get this error if you terminate postgres and immediately restart it using the same port; in this case, you must simply wait a few seconds until the operating system closes the port before trying again. Finally, you might get this error if you specify a port number that your operating system considers to be reserved. For example, many versions of Unix consider port numbers under 1024 to be “trusted” and only permit the Unix superuser to access them.
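One way to check whether the port is held by another process, assuming a Linux system with the iproute2 ss utility (5432 is just the usual default port):
$ ss -ltn | grep 5432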
3.12.8. Notes
The utility command pg_ctl can be used to start and shut down the postgres server safely and comfortably.
If at all possible, do not use SIGKILL to kill the main postgres server. Doing so will prevent postgres from freeing the system resources (e.g., shared memory and semaphores) that it holds before terminating. This might cause problems for starting a fresh postgres run.
To terminate the postgres server normally, the signals SIGTERM, SIGINT, or SIGQUIT can be used. The first will wait for all clients to terminate before quitting, the second will forcefully disconnect all clients, and the third will quit immediately without proper shutdown, resulting in a recovery run during restart.
The SIGHUP signal will reload the server configuration files. It is also possible to send SIGHUP to an individual server process, but that is usually not sensible.
To cancel a running query, send the SIGINT signal to the process running that command. To terminate a backend process cleanly, send SIGTERM to that process. See also pg_cancel_backend.
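Both actions can also be performed from SQL rather than by sending signals directly; a minimal sketch, with 12345 standing in for a real backend process ID:
$ psql -c 'SELECT pg_cancel_backend(12345);'
$ psql -c 'SELECT pg_terminate_backend(12345);'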
The postgres server uses SIGQUIT to tell subordinate server processes to terminate without normal cleanup. This signal should not be used by users. It is also unwise to send SIGKILL to a server process — the main postgres process will interpret this as a crash and will force all the sibling processes to quit as part of its standard crash-recovery procedure.
3.12.9. Bugs
The -- options will not work on FreeBSD or OpenBSD. Use -c instead. This is a bug in the affected operating systems; a future release of IvorySQL will provide a workaround if this is not fixed.
3.12.10. Single-User Mode
To start a single-user mode server, use a command like
postgres --single -D /usr/local/pgsql/data other-options my_database
Provide the correct path to the database directory with -D, or make sure that the environment variable PGDATA is set. Also specify the name of the particular database you want to work in.
Normally, the single-user mode server treats newline as the command entry terminator; there is no intelligence about semicolons, as there is in psql. To continue a command across multiple lines, you must type backslash just before each newline except the last one. The backslash and adjacent newline are both dropped from the input command. Note that this will happen even when within a string literal or comment.
But if you use the -j
command line switch, a single newline does not terminate command entry; instead, the sequence semicolon-newline-newline does. That is, type a semicolon immediately followed by a completely empty line. Backslash-newline is not treated specially in this mode. Again, there is no intelligence about such a sequence appearing within a string literal or comment.
In either input mode, if you type a semicolon that is not just before or part of a command entry terminator, it is considered a command separator. When you do type a command entry terminator, the multiple statements you’ve entered will be executed as a single transaction.
To quit the session, type EOF (Control+D, usually). If you’ve entered any text since the last command entry terminator, then EOF will be taken as a command entry terminator, and another EOF will be needed to exit.
Note that the single-user mode server does not provide sophisticated line-editing features (no command history, for example). Single-user mode also does not do any background processing, such as automatic checkpoints or replication.
3.12.11. Examples
To start postgres in the background using default values, type:
$ nohup postgres >logfile 2>&1 </dev/null &
To start postgres with a specific port, e.g., 1234:
$ postgres -p 1234
To connect to this server using psql, specify this port with the -p option:
$ psql -p 1234
or set the environment variable PGPORT:
$ export PGPORT=1234
$ psql
Named run-time parameters can be set in either of these styles:
$ postgres -c work_mem=1234
$ postgres --work-mem=1234
Either form overrides whatever setting might exist for work_mem in IvorySQL.conf. Notice that underscores in parameter names can be written as either underscore or dash on the command line. Except for short-term experiments, it’s probably better practice to edit the setting in IvorySQL.conf than to rely on a command-line switch to set a parameter.