Question

I have some files that I want to copy to s3. Rather than doing one call per file, I want to include them all in one single call (to be as efficient as possible).

However, I can only get it to work if I add the --recursive flag, which makes it look in all child directories (all the files I want are in the current directory only).

So this is the command I have now, which works:

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg"

But ideally I would like to remove --recursive so it stops traversing into subdirectories, e.g. something like this (which does not work):

aws s3 cp --dryrun . s3://mybucket --exclude "*" --include "*.jpg"

(I have simplified the example; in my script I have several different include patterns.)


Solution

The AWS CLI's S3 wildcard support is a bit primitive, but you can use multiple --exclude options to accomplish this. Note that the order of includes and excludes matters: filters that appear later in the command take precedence over earlier ones.

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg" --exclude "*/*"
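
The question mentions several different include patterns; the same approach extends with additional --include filters (the extra extension below is only a placeholder). Because the last matching filter wins, the trailing --exclude "*/*" still drops anything inside a subdirectory:

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg" --include "*.png" --exclude "*/*"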

OTHER TIPS

Try the command:

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*/"

Hope it helps.

I tried the suggested answers and could not get aws to skip nested folders. I saw some weird output about calculating size, and 0-size objects, despite using the exclude flag.

I eventually gave up on the --recursive flag and used bash to perform a single s3 upload for each file matched. Remove --dryrun once you're ready to roll!

for i in *.{jpg,jpeg}; do aws s3 cp --dryrun "${i}" "s3://your-bucket/your-folder/${i}"; done
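
One caveat with the loop above, as a small sketch assuming bash: if a pattern matches no files, the shell passes the literal glob string to aws. Enabling nullglob avoids that:

#!/usr/bin/env bash
# Unmatched globs expand to nothing instead of the literal pattern
shopt -s nullglob
for i in *.jpg *.jpeg; do
    # Remove --dryrun once the output looks right
    aws s3 cp --dryrun "${i}" "s3://your-bucket/your-folder/${i}"
done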

I would suggest a utility called s4cmd, which provides Unix-like file system operations and also supports wildcards: https://github.com/bloomreach/s4cmd
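
For example, a minimal sketch assuming s4cmd is installed (the bucket name is a placeholder; see the project README for the exact options):

s4cmd put *.jpg s3://mybucket/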
