How to resume scp with partially copied files?

Tags: Bash, Rsync, Scp, Resume

Bash Problem Overview


I use the scp shell command to copy a huge folder of files.

At some point I had to kill the running command (with Ctrl+C or kill).

To my understanding, scp copies files sequentially, so there should be at most one partially copied file.

How can the same scp command be resumed so that it does not overwrite the successfully copied files and properly handles the partially copied ones?

P.S. I know I can do this kind of thing with rsync, but scp is faster for me for some reason, so I use it instead.

Bash Solutions


Solution 1 - Bash

You should use rsync over ssh:

rsync -P -e ssh remoteuser@remotehost:/remote/path /local/path

The key option is -P, which is equivalent to --partial --progress:

> By default, rsync will delete any partially transferred file if the transfer is interrupted. In some circumstances it is more desirable to keep partially transferred files. Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster.

Other options, such as -a (archive mode) and -z (enable compression), can also be used.
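For instance, a minimal sketch of resuming the original recursive folder copy with archive mode and compression enabled (user, host, and paths are placeholders):

rsync -azP -e ssh remoteuser@remotehost:/remote/path /local/path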

The manual: https://download.samba.org/pub/rsync/rsync.html

Solution 2 - Bash

An alternative to rsync:

Use sftp with the -r option (recursively copy entire directories) and the -a option of sftp's get command ("resume partial transfers of existing files").

Prerequisite: your sftp implementation already provides get with the -a option.

Example:

Copy the directory /foo/bar from the remote server to your current local directory. The directory bar will be created in your current local directory.

echo "get -a /foo/bar" | sftp -r user@remote_server

Solution 3 - Bash

Since OpenSSH 6.3, you can use the reget command in sftp.

It has the same syntax as get, except that it starts the transfer from the end of an existing local file.

echo "reget /file/path" | sftp -r user@server_name

The -a switch of the get command and the global -a command-line switch of sftp have the same effect.
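A quick sketch of the two equivalent forms (host and file path are placeholders):

echo "get -a /file/path" | sftp user@server_name
sftp -a user@server_name:/file/path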

Solution 4 - Bash

Another possibility is to try to salvage an scp transfer you've already started when it stalls.

Press Ctrl+Z to stop and background it, then ssh over to the receiving server, log in, and exit. Now fg the scp process and watch it resume from 'stalled'!
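The whole sequence looks roughly like this (a sketch; host and path are placeholders):

scp remoteuser@remotehost:/remote/path/big.tgz .   # transfer stalls
# press Ctrl+Z to suspend scp
ssh remoteuser@remotehost                          # log in, then type exit
fg                                                 # scp resumes from 'stalled'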

Solution 5 - Bash

When rsync stalled as well after a couple of seconds, despite initially running fine, I ended up with the following brute-force solution that starts, kills, and restarts the download every 60 seconds:

cat run_me.sh
#!/bin/bash
# Repeatedly (re)start the transfer; --partial lets each attempt
# resume where the previous one left off.
# Stop the loop with Ctrl+C once the file is complete.
while true
do
  rsync --partial --progress --rsh=ssh user@host:/path/file.tgz file.tgz &
  TASK_PID=$!
  sleep 60                      # give the transfer a minute to run
  kill "$TASK_PID" 2>/dev/null  # stop it in case it has stalled
  sleep 2
done
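To run it (assuming the script is saved as run_me.sh, as shown above):

chmod +x run_me.sh
./run_me.sh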

Solution 6 - Bash

You can make use of the --rsh and -P options of rsync. -P keeps partial downloads (and shows progress), and --rsh tells rsync to transfer over the SSH protocol.

The complete command would be: rsync -P --rsh=ssh remoteuser@remotehost:/remote/path /local/path

Solution 7 - Bash

I ran into the same issue yesterday, transferring a huge SQL dump via scp; I got lucky with wget --continue the_url.

This blog post explains it quite well: http://www.cyberciti.biz/tips/wget-resume-broken-download.html. Basically:

wget --continue url
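Note that this only works if the file is also reachable over HTTP or FTP. If it isn't already published, one quick way to expose it temporarily is Python's built-in web server (a sketch, assuming Python 3.7+ on the remote host; port and paths are placeholders):

# on the remote host
python3 -m http.server 8000 --directory /path/to/dumps
# on the local machine
wget --continue http://remotehost:8000/dump.sql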

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type      | Original Author       | Original Content on Stackoverflow
Question          | Bohdan                | View Question on Stackoverflow
Solution 1 - Bash | jordi                 | View Answer on Stackoverflow
Solution 2 - Bash | Cyrus                 | View Answer on Stackoverflow
Solution 3 - Bash | Martin Prikryl        | View Answer on Stackoverflow
Solution 4 - Bash | Puckfist              | View Answer on Stackoverflow
Solution 5 - Bash | remigiusz boguszewicz | View Answer on Stackoverflow
Solution 6 - Bash | Bunny Rabbit          | View Answer on Stackoverflow
Solution 7 - Bash | Arnaud Bouchot        | View Answer on Stackoverflow