How to pass in password to pg_dump?

Bash, Postgresql, Shell, Crontab

Bash Problem Overview


I'm trying to create a cronjob to back up my database every night before something catastrophic happens. It looks like this command should meet my needs:

0 3 * * * pg_dump dbname | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz

Except after running that, it expects me to type in a password. I can't do that if I run it from cron. How can I pass one in automatically?

Bash Solutions


Solution 1 - Bash

Create a .pgpass file in the home directory of the account that pg_dump will run as.

The format is:

hostname:port:database:username:password

Then, set the file's mode to 0600. Otherwise, it will be ignored.

chmod 600 ~/.pgpass

See the PostgreSQL documentation on the password file (libpq-pgpass) for more details.
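
For example, a minimal setup sketch (the hostname, database, user, and password below are placeholders):

```shell
# One line per connection, in the format hostname:port:database:username:password
echo 'localhost:5432:dbname:dbuser:s3cret' > "$HOME/.pgpass"

# libpq ignores the file unless it is readable/writable by the owner only
chmod 600 "$HOME/.pgpass"

ls -l "$HOME/.pgpass"   # should show -rw-------
```

With that in place, the cron entry from the question runs without a password prompt, as long as the host, port, database, and user in the entry match what pg_dump connects to.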

Solution 2 - Bash

Or you can set up crontab to run a script. Inside that script you can set an environment variable like this: export PGPASSWORD="$put_here_the_password"

This way, if you have multiple commands that require the password, you can put them all in the script. If the password changes, you only have to change it in one place (the script).

And I agree with Joshua: using pg_dump -Fc generates the most flexible export format and is already compressed. For more info, see the pg_dump documentation.

E.g.

# dump the database in custom-format archive
pg_dump -Fc mydb > db.dump

# restore the database
pg_restore -d newdb db.dump
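
The script approach might look like the following sketch (the database name, password, and paths are placeholders, and the pg_dump call is guarded with command -v so the sketch is harmless where PostgreSQL isn't installed):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder password; in a real setup, prefer ~/.pgpass or a root-only file
export PGPASSWORD="s3cret"

BACKUP_DIR="$HOME/backup/db"
STAMP="$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"

# -Fc produces a compressed, custom-format archive restorable with pg_restore
if command -v pg_dump >/dev/null 2>&1; then
  pg_dump -Fc dbname > "$BACKUP_DIR/$STAMP.dump" \
    || echo "pg_dump failed; is the server reachable?" >&2
fi

# Don't leave the password in the environment once we're done
unset PGPASSWORD
echo "backup target: $BACKUP_DIR/$STAMP.dump"
```

Pointing the cron entry at this one script keeps the password out of the crontab itself.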

Solution 3 - Bash

If you want to do it in one command:

PGPASSWORD="mypass" pg_dump mydb > mydb.dump

Solution 4 - Bash

For a one-liner, like migrating a database, you can use --dbname followed by a connection string (including the password), as stated in the pg_dump manual.

In essence:

pg_dump --dbname=postgresql://username:[email protected]:5432/mydatabase

Note: Make sure that you use the option --dbname instead of the shorter -d and use a valid URI prefix, postgresql:// or postgres://.

The general URI form is:

postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]

Best practice in your case (a repetitive task in cron): this shouldn't be done, because of security issues. If it weren't for the .pgpass file, I would save the connection string as an environment variable.

export MYDB=postgresql://username:[email protected]:5432/mydatabase

then have in your crontab

0 3 * * * pg_dump --dbname=$MYDB | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz
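
One caveat: cron runs jobs with a minimal environment, so a variable exported in your login shell will not reach the job. A sketch of defining the variable in the crontab itself instead (note that a bare % is special in crontab lines and must be escaped as \%):

```crontab
# crontab -e
MYDB=postgresql://username:[email protected]:5432/mydatabase
0 3 * * * pg_dump --dbname=$MYDB | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz
```

Vixie cron accepts plain NAME=value assignments at the top of a crontab; the value is not shell-expanded, which is fine here since it is a literal connection string.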

Solution 5 - Bash

This one-liner helps me when creating a dump of a single database.

PGPASSWORD="yourpassword" pg_dump -U postgres -h localhost mydb > mydb.pgsql

Solution 6 - Bash

$ PGPASSWORD="mypass" pg_dump -i -h localhost -p 5432 -U username -F c -b -v -f dumpfilename.dump databasename

Note: the -i (--ignore-version) flag is obsolete and was removed in PostgreSQL 9.5, so drop it on newer versions.

Solution 7 - Bash

You can pass a password into pg_dump directly by using the following:

pg_dump "host=localhost port=5432 dbname=mydb user=myuser password=mypass" > mydb_export.sql

Solution 8 - Bash

@Josue Alexander Ibarra's answer works on CentOS 7 and version 9.5 even if --dbname is not passed.

pg_dump postgresql://username:[email protected]:5432/mydatabase 

Solution 9 - Bash

Note that, on Windows, the pgpass.conf file must be in the following folder:

%APPDATA%\postgresql\pgpass.conf

If there's no postgresql folder inside the %APPDATA% folder, create it.

The content of the pgpass.conf file is something like:

localhost:5432:dbname:dbusername:dbpassword

Solution 10 - Bash

As detailed in this blog post, there are two ways to non-interactively provide a password to PostgreSQL utilities such as pg_dump: using the .pgpass file, or using the PGPASSWORD environment variable.

Solution 11 - Bash

Correct me if I'm wrong, but if the system user is the same as the database user, PostgreSQL won't ask for the password: it relies on the system for authentication (peer authentication). This might be a matter of configuration.

Thus, when I wanted the database owner postgres to back up its databases every night, I could create a crontab for it: crontab -e -u postgres. Of course, postgres would need to be allowed to execute cron jobs; for that, it must be listed in /etc/cron.allow, or /etc/cron.deny must be empty.
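
A sketch of what that postgres-user crontab might contain, assuming peer authentication is in place (the backup path is a placeholder, and % is escaped as \% because cron treats a bare % specially):

```crontab
# crontab -e -u postgres
0 3 * * * pg_dump dbname | gzip > /var/backups/db/$(date +\%Y-\%m-\%d).psql.gz
```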

Solution 12 - Bash

Backup over ssh with password using temporary .pgpass credentials and push to S3:

#!/usr/bin/env bash
cd "$(dirname "$0")"

DB_HOST="*******.*********.us-west-2.rds.amazonaws.com"
DB_USER="*******"
SSH_HOST="[email protected]_domain.com"
BUCKET_PATH="bucket_name/backup"

if [ $# -ne 2 ]; then
	echo "Error: 2 arguments required"
	echo "Usage:"
	echo "  my-backup-script.sh <DB-name> <password>"
	echo "  <DB-name> = The name of the DB to backup"
	echo "  <password> = The DB password, which is also used for GPG encryption of the backup file"
	echo "Example:"
	echo "  my-backup-script.sh my_db my_password"
	exit 1
fi

DATABASE=$1
PASSWORD=$2

echo "set remote PG password .."
echo "$DB_HOST:5432:$DATABASE:$DB_USER:$PASSWORD" | ssh "$SSH_HOST" "cat > ~/.pgpass; chmod 0600 ~/.pgpass"
echo "backup over SSH and gzip the backup .."
ssh "$SSH_HOST" "pg_dump -U $DB_USER -h $DB_HOST -C --column-inserts $DATABASE" | gzip > ./tmp.gz
echo "unset remote PG password .."
echo "*********" | ssh "$SSH_HOST" "cat > ~/.pgpass"
echo "encrypt the backup .."
gpg --batch --passphrase "$PASSWORD" --cipher-algo AES256 --compression-algo BZIP2 -co "$DATABASE.sql.gz.gpg" ./tmp.gz

# Backing up to AWS obviously requires having your credentials to be set locally
# EC2 instances can use instance permissions to push files to S3
DATETIME=$(date "+%Y%m%d-%H%M%S")
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/"$DATETIME".sql.gz.gpg
# s3 is cheap, so don't worry about a little temporary duplication here
# "latest" is always good to have because it makes it easier for dev-ops to use
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/latest.sql.gz.gpg

echo "local clean-up .."
rm ./tmp.gz
rm "$DATABASE.sql.gz.gpg"

echo "-----------------------"
echo "To decrypt and extract:"
echo "-----------------------"
echo "gpg -d ./$DATABASE.sql.gz.gpg | gunzip > tmp.sql"
echo

Just substitute the first few config lines with whatever you need. If you're not interested in the S3 backup part, take it out.

This script deletes the credentials in .pgpass afterward, because in some environments the default SSH user can sudo without a password (for example, an EC2 instance with the ubuntu user), so keeping those credentials in .pgpass under a different host account might be pointless.

Solution 13 - Bash

For Windows the pgpass.conf file should exist on path:

%APPDATA%\postgresql\pgpass.conf

On my Windows 10 absolute path it is:

C:\Users\Ognjen\AppData\Roaming\postgresql\pgpass.conf

Note: If there is no postgresql folder in %APPDATA%, create one with pgpass.conf file inside it.

Content of pgpass.conf could be:

*:5432:*:*:myDbPassword

Or more specific content could be:

localhost:5432:dbName:username:password

Note: The content of pgpass.conf must NOT end with whitespace (after the password), or an error will occur.

Solution 14 - Bash

A secure way of passing the password is to store it in a .pgpass file.

The content of the .pgpass file has the format:

db_host:db_port:db_name:db_user:db_pass

#Eg
localhost:5432:db1:admin:tiger
localhost:5432:db2:admin:tiger

Now, store this file in the home directory of the user, with permissions u=rw (0600) or less.

To find the home directory of the user, use echo $HOME.

Restrict the file's permissions: chmod 0600 /home/ubuntu/.pgpass

Solution 15 - Bash

You just need to open pg_hba.conf and set the method to trust for all entries. That works for me, but note that this leaves you with no security at all.

Solution 16 - Bash

On Windows you can set the password variable before using pg_dump.exe, and you can automate it all in a .bat file, for example:

C:\>SET PGPASSWORD=dbpass
C:\>"folder_where_is_pg_dump\pg_dump.exe" -f "dump_file" -h "db_host" -U db_usr --schema "db_schema" "db_name"

Solution 17 - Bash

Another (probably not secure) way to pass the password is to use input redirection, i.e. calling

pg_dump [params] < [path to file containing password]
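
A sketch of that, assuming pg_dump will read the password from stdin when the server asks for one (the file name and password are hypothetical, and the pg_dump call is guarded so the sketch is harmless where PostgreSQL isn't installed):

```shell
# Keep only the password in a tightly-permissioned file
umask 077
printf 's3cret\n' > "$HOME/.pg_password_file"

if command -v pg_dump >/dev/null 2>&1; then
  pg_dump -U dbuser -h localhost mydb < "$HOME/.pg_password_file" > mydb.sql
fi

ls -l "$HOME/.pg_password_file"   # -rw------- thanks to umask 077
```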

Solution 18 - Bash

The easiest way, in my opinion, is this: edit your client-authentication config file, pg_hba.conf, and add the following line:

host <you_db_name> <you_db_owner> 127.0.0.1/32 trust

and after this you can start your cron job like this:

pg_dump -h 127.0.0.1 -U <you_db_user> <you_db_name> | gzip > /backup/db/$(date +%Y-%m-%d).psql.gz

and it works without a password.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | mpen | View Question on Stackoverflow
Solution 1 - Bash | araqnid | View Answer on Stackoverflow
Solution 2 - Bash | Max | View Answer on Stackoverflow
Solution 3 - Bash | gitaarik | View Answer on Stackoverflow
Solution 4 - Bash | Josue Alexander Ibarra | View Answer on Stackoverflow
Solution 5 - Bash | Rajan Verma - Aarvy | View Answer on Stackoverflow
Solution 6 - Bash | Francisco Luz | View Answer on Stackoverflow
Solution 7 - Bash | Larry Spence | View Answer on Stackoverflow
Solution 8 - Bash | Jauyzed | View Answer on Stackoverflow
Solution 9 - Bash | Fernando Meneses Gomes | View Answer on Stackoverflow
Solution 10 - Bash | manfall19 | View Answer on Stackoverflow
Solution 11 - Bash | Tobias | View Answer on Stackoverflow
Solution 12 - Bash | StartupGuy | View Answer on Stackoverflow
Solution 13 - Bash | ognjenkl | View Answer on Stackoverflow
Solution 14 - Bash | saintlyzero | View Answer on Stackoverflow
Solution 15 - Bash | stefanoz | View Answer on Stackoverflow
Solution 16 - Bash | miguel_vivanco | View Answer on Stackoverflow
Solution 17 - Bash | szymond | View Answer on Stackoverflow
Solution 18 - Bash | Dofri | View Answer on Stackoverflow