Question

How do I process a file or multiple files in place with bash?

So: read from file x, do some processing (e.g. search-replace) and write to file x.

I know that with sed you can do sed -i "" "s/original/replacement/g" FILENAME, but sometimes sed doesn't cut it and I need a different tool that lacks an -i ""-style option.
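For reference, the in-place flag differs between the two common sed dialects (FILENAME is a placeholder):

sed -i 's/original/replacement/g' FILENAME     # GNU sed (most Linux systems)
sed -i '' 's/original/replacement/g' FILENAME  # BSD sed (macOS): -i takes a backup suffix, empty here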

I recently discovered all by myself that I could do the following:

( BUFFER="`cat FILENAME`"; echo "$BUFFER" > FILENAME )

which uses a shell variable to hold the contents of the file. For very large files this is probably not efficient, and shell variables may be limited in capacity.
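There is also a correctness caveat: command substitution strips all trailing newlines (and the variable cannot hold NUL bytes), so the round trip is not byte-for-byte faithful. A small demonstration, with FILENAME as the placeholder:

printf 'a\n\n\n' > FILENAME    # three lines, the last two empty
BUFFER="`cat FILENAME`"        # substitution drops the trailing newlines
echo "$BUFFER" > FILENAME      # writes back "a" plus a single newline
wc -l FILENAME                 # reports 1, not 3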

Is there a different way?

I also read this answer, but I'm looking for a bash solution. Open source tools that somehow fit the task perfectly are also welcome.


Solution 2

You can store the output of the command in a temp file, and then move the temp file over the original. Example:

command <file >tmp && mv tmp file
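A slightly more defensive sketch of the same idea (command and file are the same placeholders as above): mktemp generates a unique temp file, created next to the target so the final mv stays on one filesystem, and the trap removes it if the command fails:

tmp=$(mktemp file.XXXXXX) || exit 1    # unique temp file in the same directory
trap 'rm -f "$tmp"' EXIT               # clean up on failure; harmless after the mv
command <file >"$tmp" && mv "$tmp" file

Note that mv replaces the target with a new inode, so the original file's permissions, ownership and hard links are not carried over.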

Other tips

There are many scripting tools around, like awk, perl, ruby and python, but for large files in plain bash it is better to write the output to a temporary file first and then save it back:

while IFS= read -r LINE; do
    case "$LINE" in
    something_to_exclude|another_to_exclude)
        # Skip these lines entirely.
        ;;
    yet_another_to_exclude)
        # Skip these as well.
        ;;
    *)
        # This is fine to include.
        echo "$LINE"
        ;;
    esac
done < "$FILENAME" > "$FILENAME".temp

cat "$FILENAME".temp > "$FILENAME"
rm "$FILENAME".temp
Licensed under: CC-BY-SA with attribution