If you are just updating metadata (and not the body/content itself), you probably want to use `copy` instead of `save`. This may be non-obvious, but it keeps the operation on the S3 side, so it will be MUCH faster.
The signature for `copy` looks like:

```ruby
copy(target_directory_key, target_file_key, options = {})
```
So I think my proposed solution should look (more or less) like this:
```ruby
directory.files.each do |f|
  content_type = case f.key.split(".").last
                 when "jpg" then "image/jpeg"
                 when "mov" then "video/quicktime"
                 end
  next unless content_type # skip extensions we don't have a type for

  options = {
    'Content-Type' => content_type,
    'x-amz-metadata-directive' => 'REPLACE'
  }
  puts "copied!" if f.copy(f.directory.key, f.key, options)
end
```
That should basically tell S3 "copy my file over the top of itself, but change this header". That way you don't have to download/reupload the file. This is probably the approach you want.
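The extension-to-type mapping in the loop above can also be pulled out into a small standalone helper, which is easier to extend and to test in isolation. This is just a sketch; the `CONTENT_TYPES` hash and `content_type_for` name are hypothetical, not part of fog:

```ruby
# Hypothetical helper mirroring the case statement above: map a file key's
# extension to a MIME type, returning nil for unknown extensions.
CONTENT_TYPES = {
  "jpg" => "image/jpeg",
  "mov" => "video/quicktime"
}.freeze

def content_type_for(key)
  CONTENT_TYPES[key.split(".").last]
end

content_type_for("holiday.jpg") # => "image/jpeg"
content_type_for("clip.mov")    # => "video/quicktime"
content_type_for("notes.txt")   # => nil
```

Returning `nil` for unknown extensions lets the calling loop skip those files instead of overwriting their Content-Type with a blank value.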
So, solution aside, it still seems like you found a bug. Could you include an example of what you mean by "individually update each file"? I just want to make sure I understand exactly what you mean and can see the working and non-working cases side by side. Also, how/why do you think it isn't updating the Content-Type? It might actually be updating it, but just not displaying the updated value correctly, or something like that. Bonus points if you can create an issue here so I don't forget to address it: https://github.com/fog/fog/issues?state=open