Question

I want to make a search for all .fits files that contain a certain text in their name and then copy them to a directory.

I can use a command called fetchKeys to list the files that contain say 'foo'

The command looks like this : fetchKeys -t 'foo' -F | grep .fits

This returns a list of .fits files that contain 'foo'. Great! Now I want to copy all of these to a directory /path/to/dir. There are too many files to do individually , I need to copy them all using one command.

I'm thinking something like:

fetchKeys -t 'foo' -F | grep .fits > /path/to/dir

or

cp fetchKeys -t 'foo' -F | grep .fits /path/to/dir

but of course neither of these works. Any other ideas?

Solution 3

The xargs tool can execute a command for the lines it receives on stdin. Here, we execute a cp command:

fetchKeys -t 'foo' -F | grep '\.fits' | xargs -n 500 cp -vfa -t /path/to/dir

xargs is a very useful tool, although its parameters are not entirely trivial. This command reads in 500 .fits file names at a time and calls a single cp command for each group. I haven't tested it thoroughly; if it doesn't work, leave a comment.
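As a toy illustration of how xargs groups its input into batches (unrelated to fetchKeys, just the grouping behavior):

```shell
# Feed three lines to xargs; -n 2 groups them two at a time,
# so echo is invoked twice: once with "a b", once with "c".
printf 'a\nb\nc\n' | xargs -n 2 echo
```

The same grouping happens with cp: each batch of file names becomes the arguments of one cp invocation.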

Other suggestions

If this is on Linux/Unix, can you use the find command? It seems very much like fetchKeys.

$ find . -name "*foo*.fits" -type f -print0 | while IFS= read -r -d $'\0' file
do
    basename=$(basename "$file")
    cp "$file" "$fits_dir/$basename"
done

The find command will find all files that match *foo*.fits in their name. The -type f says they have to be files and not directories. The -print0 means print out the files found, but separate them with the NUL character. Normally, the find command will simply return a file on each line, but what if the file name contains spaces, tabs, new lines, or even other strange characters?

The -print0 will separate out files with nulls (\0), and the read -d $'\0' file means to read in each file separating by these null characters. If your files don't contain whitespace or strange characters, you could do this:
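To see why the NUL-separated form matters, here is a small self-contained demo (using throwaway mktemp directories rather than your real data) that copies a file whose name contains a space:

```shell
# Create a source file whose name contains a space, then copy it
# safely: find -print0 emits NUL-separated names, and read -d $'\0'
# consumes them without splitting on the space.
tmp=$(mktemp -d)
dest=$(mktemp -d)
touch "$tmp/foo bar.fits"
find "$tmp" -name "*foo*.fits" -type f -print0 | while IFS= read -r -d $'\0' file
do
    cp "$file" "$dest/$(basename "$file")"
done
ls "$dest"   # the copy survives the space in the name
```

With plain line-based reading, an unquoted name like "foo bar.fits" would be split into two words and the copy would fail.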

$ find . -name "*foo*.fits" -type f | while read -r file
do
    basename=$(basename "$file")
    cp "$file" "$fits_dir/$basename"
done

Basically, each file found by your find command is read into the shell variable file. Then you can use that variable to copy the file into your $fits_dir or wherever you want.

Again, maybe there's a reason to use fetchKeys, and it may be possible to replace the find with fetchKeys, but I'm not familiar with that command.

Copy all files with the name containing foo to a certain directory:

find . -name "*foo*.fits" -type f -exec cp {} "/path/to/dir/" \;

Copy all files themselves containing foo to a certain directory (solution without xargs):

find . -type f -exec grep -l foo {} \; | while IFS= read -r f; do cp "$f" /path/to/dir/; done

The find command has the very useful options -exec, -print, and -delete. They are robust and eliminate the need to process file names manually. The syntax for -exec is: -exec (what to do) \; and the name of the file currently being processed is substituted for the placeholder {}.
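With GNU find, the \; terminator can also be replaced by +, which batches many file names into a single command invocation (much like xargs). Combined with GNU cp's -t option, which names the target directory up front so the batched names can come last, the copy looks like this (a self-contained sketch using mktemp directories; assumes GNU find and GNU cp):

```shell
# Demo: find's '+' terminator passes the matches to cp in batches,
# and cp -t names the destination so the file names can be appended
# at the end of the command line.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/afoo1.fits" "$src/bfoo2.fits" "$src/other.txt"
find "$src" -name "*foo*.fits" -type f -exec cp -t "$dst" {} +
ls "$dst"   # only the two *foo*.fits files are copied
```

This avoids one process spawn per file, which matters when there are thousands of matches.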

Other commands that are very useful for such tasks are sed and awk.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow