Question

Okay, so I'm in a situation where I'd really like to be using either a co-process via coproc or process substitution such as <(some command), but unfortunately I'm limited to bash 3.2 in one of my target environments, which limits what I can do.

The reason I need a co-process is that I need to read line-by-line from one file, while looping over another.

Currently I'm using exec 6< /foo/bar to keep a file open for reading so that I can do read line <&6 whenever I need more input. This works fine, but only on plain-text files; really I'd like to keep my file(s) compressed rather than decompressing them before running my script.

I also need to be able to do the same for writing to a new, compressed file without having to waste space writing in plain-text then compressing afterwards.
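
For reference, the bare pattern I'm relying on looks like this (descriptor numbers and paths here are just placeholders):

# Keep one file open for reading on fd 5 and another for writing on fd 6
exec 5< /some/input
exec 6> /some/output

read line <&5        # pull in a line whenever more input is needed
echo "$line" >&6     # push out a line whenever there's output

# Close both when done
exec 5<&- 6>&-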

So… are there any alternatives available in bash 3? As I've noted, I'm already in a loop over another file, so I don't have the option of just piping my output into gzip (or piping zcat into my loop) as I need to do this independently of my loop.

To try to give an example, here's a stripped down version of what I'm doing now:

# Decompress compressed match-file
gzip -dc /foo/compressed.gz > /tmp/match

# Setup file handles (to keep files open for reading/writing)
exec 5< /tmp/match
exec 6> /tmp/matches

# Loop over input file (/foo/bar) for matches
read next_match <&5
while read line; do
    if [ "$line" = "$next_match" ]; then
        read next_match <&5
        echo "$line" >&6
    fi

    echo "$line"
done < /foo/bar

# Close file handles
exec 5<&-
exec 6>&-
rm /tmp/match

# Compress matches and overwrite old match file
gzip -cf9 /tmp/matches > /foo/compressed.gz
rm /tmp/matches

Forgive any typos, and the general uselessness of the actual script; I just wanted to keep it fairly simple. As you can see, while it works fine, it's not exactly optimal, thanks to the wasteful plain-text files.


Solution

You might want to use mkfifo to create named pipes and let gzip write/read them in background processes. The following seems to work for me:

#!/bin/bash

# create test files (one character per line)
echo abcdefgh | grep -o . | gzip > /tmp/foo.gz
echo aafbchddjjklsefksi | grep -o . > /tmp/bar

# create pipes for zipping and unzipping
PIPE_GUNZIP=/tmp/$$.gunzip
PIPE_GZIP=/tmp/$$.gzip
mkfifo "$PIPE_GUNZIP"
mkfifo "$PIPE_GZIP"

# use pipes as endpoints for gzip / gunzip
gzip -dc /tmp/foo.gz > "$PIPE_GUNZIP" &
GUNZIP_PID=$!
gzip -c9 > /tmp/foo.gz.INCOMPLETE < "$PIPE_GZIP" &
GZIP_PID=$!

exec 5< "$PIPE_GUNZIP"
exec 6> "$PIPE_GZIP"

read next_match <&5
while read line; do
    if [ "$line" = "$next_match" ]; then
        read next_match <&5
        echo "$line" >&6
    fi

    echo "$line"
done < /tmp/bar

# Close file handles
exec 5<&-
exec 6>&-

# wait for gzip to terminate, replace input with output, clean up
wait $GZIP_PID
mv /tmp/foo.gz.INCOMPLETE /tmp/foo.gz
rm "$PIPE_GZIP"

# wait for gunzip to terminate, clean up
wait $GUNZIP_PID
rm "$PIPE_GUNZIP"

# check result
ls -l /tmp/{foo,bar}*
gzip -dc /tmp/foo.gz
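
One thing worth noting: if the script exits early (say, on an error), the FIFOs and the .INCOMPLETE file are left behind in /tmp. A trap set once the pipe names are known should take care of that; this is just a sketch reusing the paths from above (the cleanup function name is arbitrary):

# Remove the FIFOs and any partial output if the script exits early
cleanup() {
    rm -f "$PIPE_GUNZIP" "$PIPE_GZIP" /tmp/foo.gz.INCOMPLETE
}
trap cleanup EXIT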

OTHER TIPS

Since process substitution is available in bash 3.2, you can simply use it.

# Setup file handles (to keep files open for reading/writing)
exec 5< <( gzip -dc /foo/compressed.gz )
exec 6> >( gzip -c9 > /foo/new_compressed.gz )

# Loop over input file (/foo/bar) for matches
read next_match <&5
while read line; do
    if [ "$line" = "$next_match" ]; then
        read next_match <&5
        echo "$line" >&6
    fi

    echo "$line"
done < /foo/bar

# Close file handles
exec 5<&- 6>&-

# Overwrite old match file
mv /foo/new_compressed.gz /foo/compressed.gz
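
One caveat on the write side: after exec 6>&- the gzip inside >( ... ) may still be flushing its output, and bash 3.2 gives you no reliable way to wait for a process substitution, so the mv can race with it. If that matters, you could keep process substitution for reading but use a named pipe for writing, which gives you a PID to wait on. A rough sketch (PIPE_OUT and GZIP_PID are names made up here):

# Read side: process substitution, nothing we need to wait for
exec 5< <( gzip -dc /foo/compressed.gz )

# Write side: a named pipe gives us a PID we can wait on before the mv
PIPE_OUT=/tmp/$$.out
mkfifo "$PIPE_OUT"
gzip -c9 < "$PIPE_OUT" > /foo/new_compressed.gz &
GZIP_PID=$!
exec 6> "$PIPE_OUT"

# ... same matching loop as above ...

exec 5<&- 6>&-
wait $GZIP_PID
rm "$PIPE_OUT"
mv /foo/new_compressed.gz /foo/compressed.gz
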
Licensed under: CC-BY-SA with attribution