Question

Trying to download S3 directory to local machine using s3cmd. I'm using the command:

s3cmd sync --skip-existing s3://bucket_name/remote_dir ~/local_dir

But if I restart the download after an interruption, s3cmd doesn't skip the existing local files downloaded earlier and overwrites them instead. What is wrong with the command?

Was it helpful?

Solution

Use boto-rsync instead. https://github.com/seedifferently/boto_rsync

It correctly syncs only new/changed files from s3 to the local directory.
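If the tool fits your setup, its invocation mirrors rsync's source/destination style. This is a sketch, not verified against your environment: the bucket and paths are placeholders, and running it requires valid AWS credentials.

```shell
# Install boto-rsync (a Python tool; needs AWS credentials configured,
# e.g. via AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
pip install boto_rsync

# Sync only new/changed objects from S3 to the local directory.
# s3://bucket_name/remote_dir/ and ~/local_dir/ are placeholders.
boto-rsync s3://bucket_name/remote_dir/ ~/local_dir/
```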

Other tips

I had the same problem and found the solution in comment #38 from William Denniss at http://s3tools.org/s3cmd-sync

If you have:

$ s3cmd sync --verbose s3://mybucket myfolder

Change it to:

$ s3cmd sync --verbose s3://mybucket/ myfolder/   # note the trailing slashes

Then, the MD5 hashes are compared and everything works correctly! --skip-existing works as well.

To recap, both --skip-existing and the MD5 checks are skipped if you use the first command, and both work if you use the second (I made a mistake in my previous post, as I was testing with two different directories).
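To illustrate why the MD5 comparison matters, here is a minimal Python sketch of the skip decision: download only when the local file is missing or its MD5 differs from the object's ETag. This is an illustration of the technique, not s3cmd's actual code, and the function name is hypothetical.

```python
import hashlib
import os

def should_download(local_path, remote_etag):
    """Return True if the object should be (re)downloaded.

    Illustrative only: assumes the ETag is a plain MD5 hex digest,
    which holds for non-multipart S3 uploads but not multipart ones.
    """
    if not os.path.exists(local_path):
        return True  # no local copy yet
    md5 = hashlib.md5()
    with open(local_path, "rb") as f:
        # Hash in chunks so large files don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    # S3 ETags are often returned wrapped in double quotes.
    return md5.hexdigest() != remote_etag.strip('"')
```

With matching hashes the file is left alone; any mismatch (or a missing local file) triggers a fresh download, which is the behavior the trailing-slash form of the sync command restores.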

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow