How to port data-only volumes from one host to another?
Docker Problem Overview
As described in the Docker documentation on Working with Volumes, there is the concept of so-called data-only containers, which provide a volume that can be mounted into multiple other containers, no matter whether the data-only container is actually running or not.
Basically, this sounds awesome. But there is one thing I do not understand.
These volumes (which do not explicitly map to a folder on the host for portability reasons, as the documentation states) are created and managed by Docker in some internal folder on the host (/var/docker/volumes/…).
Suppose I use such a volume, and then I need to migrate it from one host to another - how do I port the volume? AFAICS it has a unique ID - can I just go and copy the volume and its corresponding data-only container to a new host? How do I find out which files to copy? Or is there some support built into Docker that I have not discovered yet?
Docker Solutions
Solution 1 - Docker
The official answer is available in the section "Backup, restore, or migrate data volumes":
BACKUP:
sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data

- --rm: remove the container when it exits
- --volumes-from DATA: attach to the volumes shared by the DATA container
- -v $(pwd):/backup: bind mount the current directory into the container, to write the tar file to
- busybox: a small, simple image - good for quick maintenance
- tar cvf /backup/backup.tar /data: creates an uncompressed tar file of all the files in the /data directory
RESTORE:
# create a new data container
$ sudo docker create -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
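The same two steps also cover migration to a new host: run the BACKUP command on the old host, move the resulting backup.tar to the new host, and run the RESTORE commands there. A minimal sketch of the transfer, assuming SSH access and a hypothetical hostname newhost:

# on the old host, after the BACKUP step has produced ./backup.tar
scp backup.tar user@newhost:~/
# on newhost, repeat the RESTORE steps above (docker create + tar xvf) from the directory holding backup.tar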
Solution 2 - Docker
Extending the official answer from the Docker docs and the top answer here, you can have the following functions in your .bashrc or .zshrc:
# backup files from a docker volume into /tmp/backup.tar.gz
function docker-volume-backup-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -czvf /backup/backup.tar.gz "${@:2}"
}
# restore files from /tmp/backup.tar.gz into a docker volume
function docker-volume-restore-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -xzvf /backup/backup.tar.gz "${@:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie ls -lh "${@:2}"
}
# backup files from a docker volume into /tmp/backup.tar
function docker-volume-backup() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -cvf /backup/backup.tar "${@:2}"
}
# restore files from /tmp/backup.tar into a docker volume
function docker-volume-restore() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -xvf /backup/backup.tar "${@:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox ls -lh "${@:2}"
}
Note that the backup is saved into /tmp, so you can move the backup file saved there between Docker hosts.
There are also two pairs of backup/restore functions: one uses compression and debian:jessie, the other uses no compression but busybox. Favor compression if the files to back up are big.
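As a usage sketch (container names here are hypothetical, and the target host needs a data container exposing the same /data path), a volume can be moved between hosts like this:

# on the source host: archive the /data volume of my_data_container into /tmp/backup.tar.gz
docker-volume-backup-compressed my_data_container /data
# copy the archive to the other docker host
scp /tmp/backup.tar.gz user@otherhost:/tmp/
# on otherhost: unpack it into the volume attached to my_new_data_container
# (the path is given without the leading slash, because tar stores members that way)
docker-volume-restore-compressed my_new_data_container data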
Solution 3 - Docker
You can export the volume to a tar archive, transfer it to the other machine, and import the data with tar on the second machine. This does not rely on implementation details of the volumes.
# you can list shared directories of the data container
docker inspect <data container> | grep "/vfs/dir/"
# you can export data container directory to tgz
docker run --cidfile=id.tmp --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
# clean up: remove exited container used for export and temporary file
docker rm `cat id.tmp` && rm -f id.tmp
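The import side is symmetric. A sketch, assuming volume.tgz has been copied to the second machine and a data container exposing the same volume path already exists there:

# stream volume.tgz back into the data container's volume
# (tar stored the members without the leading slash, so extracting from / restores the original path)
docker run --rm -i --volumes-from <data container> ubuntu bash -c "cd / && tar -xzf -" < volume.tgz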
Solution 4 - Docker
I'll add another recent tool here from IBM, which is actually made for volume migration from one container host to another. This is currently an ongoing project, so you may find a different version with additional features in the future.
Cargo was developed to migrate containers from one host to another, along with their data, with minimal downtime. Cargo uses the data federation capabilities of a union filesystem to create a unified view of data (mainly the root file system) across the source and target hosts. This allows Cargo to start up a container almost immediately (within milliseconds) on the target host, as the data from the source root file system gets copied to the target host either on-demand (using a copy-on-write (COW) partition) or lazily in the background (using rsync).
Important points are:
- a centralized server handles the migration process
The link to the project is given here:
https://github.com/nadgowdas/cargo
Solution 5 - Docker
In case your machines are in different VPCs, or you want to copy from/to a local machine (as in my case), you can use dvsync, which I created. It's basically ngrok combined with rsync over SSH, packaged into two small images (both ~25MB). First, you start the dvsync-server on the machine you want to copy data from (you'll need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard):
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_VOLUME,target=/data,readonly \
quay.io/suda/dvsync-server
Then you can start the dvsync-client on the machine you want to copy the files to, passing the DVSYNC_TOKEN shown by the server:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
Once the copying is done, the client will exit. This works with the Docker CLI, Compose, Swarm and Kubernetes as well.
Solution 6 - Docker
Here's a one-liner, in case an SSH connection can be established between the machines:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -cf - . " | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - " '
Credits go to Guido Diepen's post.
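For example, with a hypothetical source volume my_app_data, a target host backup-host reachable over SSH, and a target volume of the same name, the filled-in command would be:

docker run --rm -v my_app_data:/from alpine ash -c "cd /from ; tar -cf - . " | ssh backup-host 'docker run --rm -i -v my_app_data:/to alpine ash -c "cd /to ; tar -xpvf - " '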
Solution 7 - Docker
Adding an answer here, as I don't have the reputation to comment. While all the above answers have helped me, I imagine there may be others like me who are also looking to copy the contents of a backup.tar file into a named Docker volume on a collaborator's machine. I don't see this discussed specifically above or in the Docker volumes documentation.
Why would you want to copy the backup.tar file into a named Docker volume?

This could be helpful in a scenario where a named Docker volume has been specified inside an existing docker-compose.yml file to be used by some of the containers.
Copying the contents of backup.tar into a named Docker volume:

- On the host machine, follow the steps in the accepted answer or the Docker volumes documentation to create a backup.tar file and push it to some repository.
- Pull backup.tar onto the collaborator's machine from the repository.
- On the collaborator's machine, create a temporary container and a named Docker volume:

docker run -v named_docker_volume:/dbdata --name temp_db_container ubuntu /bin/bash

  - --name temp_db_container: create a container called temp_db_container
  - ubuntu /bin/bash: use the ubuntu image to build temp_db_container with a starting command of /bin/bash
  - -v named_docker_volume:/dbdata: mount the /dbdata folder of temp_db_container into a Docker volume called named_docker_volume. We use this specifically named volume named_docker_volume to match the volume name specified in our docker-compose.yml file.
- On the collaborator's machine, copy over the contents of backup.tar into the named Docker volume:

docker run --rm --volumes-from temp_db_container -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

  - --volumes-from temp_db_container: temp_db_container's /dbdata folder was mapped to the named_docker_volume volume in the previous step, so any file that gets stored in the /dbdata folder immediately gets copied over to the named_docker_volume Docker volume.
  - -v $(pwd):/backup: map the local machine's present working directory to the /backup folder located inside temp_db_container.
  - ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1": untar the backup.tar file and store the untarred contents inside the /dbdata folder.
- On the collaborator's machine, clear the temporary container temp_db_container:

docker rm temp_db_container
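To double-check that the data actually landed in the named volume before the docker-compose.yml services use it, one option (a sketch, reusing the same volume name) is to list its contents from a throwaway container:

docker run --rm -v named_docker_volume:/dbdata ubuntu ls -la /dbdata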
Solution 8 - Docker
Adapted from the accepted answer, but this gives more flexibility in that you can use it in a bash pipeline:
#!/bin/bash
if [ $# != 2 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup
exit 1
fi
if [ -t 1 ]; then
echo The output of the cmd is binary data "(tar)", \
and it should be redirected instead of printed to terminal
exit 1
fi
volume="$1"
path="$2"
exec docker run --rm --mount type=volume,src="$volume",dst=/mnt/volume/ alpine tar cf - . -C /mnt/volume/"$path"
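Saved as, say, docker-volume-to-tar.sh (a hypothetical name) and made executable, the script drops straight into a pipeline; for example, assuming a volume my_volume whose data of interest lives under /data inside it:

# write the archive locally
./docker-volume-to-tar.sh my_volume /data > backup.tar
# or stream it to another machine without touching the local disk
./docker-volume-to-tar.sh my_volume /data | ssh otherhost 'cat > backup.tar'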
If you want to back up the volume periodically and incrementally, you can use the following script:
#!/bin/bash
if [ $# != 3 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup /path/to/put/backup
exit 1
fi
volume="$1"
volume_path="$2"
path="$3"
if [[ "$path" =~ ^.*/$ ]]; then
echo "The 3rd argument shouldn't end in '/', otherwise rsync would not behave as expected"
exit 1
fi
container_name="docker-backup-rsync-service-$RANDOM"
docker run --rm --name="$container_name" -d -p 8738:873 \
--mount type=volume,src="$volume",dst=/mnt/volume/ \
nobodyxu/rsyncd
echo -e '\nStarting syncing...'
rsync --info=progress2,stats,symsafe -aHAX --delete \
"rsync://localhost:8738/root/mnt/volume/$volume_path/" "$path"
exit_status=$?
echo -e '\nStopping the rsyncd docker...'
docker stop -t 1 "$container_name"
exit $exit_status
It utilizes rsync's server and client functionality to directly sync the directory between the volume and your host directory.
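A usage sketch with a hypothetical script name; the last argument is the local destination directory and, as the script itself checks, must not end with a slash:

# first run copies everything; later runs (e.g. from cron) only transfer the changes
./docker-volume-rsync-backup.sh my_volume data /srv/backups/my_volume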
Solution 9 - Docker
I was dissatisfied with the answers using tar, so I decided to take matters into my own hands. As I am going to be syncing the data often, and it's going to be big, I specifically wanted to use rsync. Sending all the data with tar every time would just be a waste of time and transfer.

After days spent working out how to get two remote Docker containers to communicate, I finally got a solution using socat.
- Run two Docker containers - one on the source host, the other on the destination host, each with one volume mounted - the source volume and the destination volume.
- Run rsync --daemon on one of the containers; it will stream/load data from its volume.
- Run docker exec source_container socat - TCP:localhost:rsync and docker exec destination_container socat TCP-LISTEN:rsync -, and connect the stdin and stdout of both together. So one socat connects to rsync --daemon and redirects data from/to stdout/stdin, while the other socat listens on the rsync port (port 873) and redirects to/from stdin/stdout. Then connect them together, so basically we pipe data from one container's port to the other.
- Then, on the other volume, run an rsync client that connects to localhost:rsync, effectively connecting via the "socat pipe" to rsync --daemon.
Basically, it works like this:
log "Running both destination and source containers"
src_did=$(
env DOCKER_HOST=$src_docker_host docker run --rm -d -i -v \
"$src_volume":/data:ro -w /data alpine_with_rsync_and_socat\
sleep infinity
)
dst_did=$(
env DOCKER_HOST=$dst_docker_host docker run --rm -d -i -v \
"$dst_volume":/data:rw -w /data alpine_with_rsync_and_socat \
sleep infinity
)
log "Running rsyncd on destination container"
env DOCKER_HOST=$dst_docker_host docker exec "$dst_did" sh -c "
cat <<EOF > /etc/rsyncd.conf &&
uid = root
gid = root
use chroot = no
max connections = 1
numeric ids = yes
reverse lookup = no
[data]
path = /data/
read only = no
EOF
rsync --daemon
"
log "Setup rsync socat forwarding between containers"
{
coproc { env DOCKER_HOST=$dst_docker_host docker exec -i "$dst_did" \
socat -T 10 - TCP:localhost:rsync,forever; }
env DOCKER_HOST=$src_docker_host docker exec -i "$src_did" \
socat -T 10 TCP-LISTEN:rsync,forever,reuseaddr - <&"${COPROC[0]}" >&"${COPROC[1]}"
} &
log "Running rsync on source that will connect to destination"
env DOCKER_HOST=$src_docker_host docker exec -e RSYNC_PASSWORD="$g_password" -w /data "$src_did" \
rsync -aivxsAHSX --progress /data/ rsync://root@localhost/data
Another really nice thing about this approach is that you can copy data between two remote hosts without ever storing the data locally. I also share the script docker-rsync-volumes, which I've written around this idea. With that script, copying a volume between two remote hosts is as simple as docker-rsync-volumes --delete -f ssh://user@productionserver grafana_data -t ssh://user@backupserver grafana_data_backup.