Question

Is there any bash command that will let you get the nth line of STDOUT?

That is to say, something that would take this

$ ls -l
-rw-r--r--@ 1 root  wheel my.txt
-rw-r--r--@ 1 root  wheel files.txt
-rw-r--r--@ 1 root  wheel here.txt

and do something like

$ ls -l | magic-command 2
-rw-r--r--@ 1 root  wheel files.txt

I realize this would be bad practice when writing scripts meant to be reused, BUT when working with the shell day to day it'd be useful to me to be able to filter my STDOUT in such a way.

I also realize this would be a semi-trivial command to write (buffer STDOUT, return a specific line), but I want to know if there's some standard shell command to do this that would be available without me dropping a script into place.
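
For illustration, a minimal sketch of what such a helper could look like (a hypothetical nth function for an interactive bash session, not a standard command):

# hypothetical helper: print only line $1 of stdin
nth() {
  local n=$1 i=0 line
  while IFS= read -r line; do
    (( ++i == n )) && { printf '%s\n' "$line"; return; }
  done
}

ls -l | nth 2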


Solution

Using sed, just for variety:

ls -l | sed -n 2p

This alternative looks more efficient, since it stops reading the input once the required line has been printed, but it may generate a SIGPIPE in the feeding process, which may in turn generate an unwanted error message:

ls -l | sed -n -e '2{p;q}'

I've seen that often enough that I usually use the first (which is easier to type, anyway), though ls is not a command that complains when it gets SIGPIPE.
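
If you want to see the SIGPIPE effect for yourself, a quick check (a sketch assuming bash, whose PIPESTATUS array records each stage's exit status) is:

seq 1000000 | sed -n '2{p;q;}'
echo "${PIPESTATUS[@]}"    # typically "141 0": seq was killed by SIGPIPE (128 + 13)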

For a range of lines:

ls -l | sed -n 2,4p

For several ranges of lines:

ls -l | sed -n -e 2,4p -e 20,30p
ls -l | sed -n -e '2,4p;20,30p'

OTHER TIPS

ls -l | head -2 | tail -1

An alternative to the nice head / tail way:

ls -al | awk 'NR==2'

or

ls -al | sed -n '2p'

From sed1line:

# print line number 52
sed -n '52p'                 # method 1
sed '52!d'                   # method 2
sed '52q;d'                  # method 3, efficient on large files

From awk1line:

# print line number 52
awk 'NR==52'
awk 'NR==52 {print;exit}'          # more efficient on large files

For the sake of completeness ;-)

shorter code

find / | awk NR==3

shorter life

find / | awk 'NR==3 {print $0; exit}'

Try this sed version:

ls -l | sed '2 ! d'

It says "delete all the lines that aren't the second one".

You can use awk:

ls -l | awk 'NR==2'

Update

The above code will not get what we want because of an off-by-one error: the first line of the ls -l output is the "total" summary line. To account for that, the following revised code will work:

ls -l | awk 'NR==3'
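
Alternatively, a sketch that sidesteps the off-by-one issue by stripping the header line first and then counting files:

ls -l | tail -n +2 | awk 'NR==2'    # skip the "total" line, then take the 2nd file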

Is Perl easily available to you?

$ perl -n -e 'if ($. == 7) { print; exit(0); }'

Obviously substitute whatever number you want for 7.
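
If the line number should vary, one way (a sketch, assuming bash) is to pass it in through an environment variable so the one-liner itself stays unchanged:

ls -l | N=2 perl -n -e 'if ($. == $ENV{N}) { print; exit(0); }'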

Another poster suggested

ls -l | head -2 | tail -1

but if you pipe head into tail, it looks like everything up to line N is processed twice.

Piping tail into head

ls -l | tail -n +2 | head -n1

would be more efficient?
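
A rough way to compare the two pipelines yourself is to time them against a large throwaway file (a sketch; /tmp/big.txt is just a hypothetical test file):

seq 1000000 > /tmp/big.txt
time head -n 500000 /tmp/big.txt | tail -n 1
time tail -n +500000 /tmp/big.txt | head -n 1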

Yes, the most efficient way (as already pointed out by Jonathan Leffler) is to use sed with print & quit:

set -o pipefail                        # cf. help set
time -p ls -l | sed -n -e '2{p;q;}'    # only print the second line & quit (on Mac OS X)
echo "$?: ${PIPESTATUS[*]}"            # cf. man bash | less -p 'PIPESTATUS'

Hmm

sed did not work in my case. I propose:

for "odd" lines 1,3,5,7... ls |awk '0 == (NR+1) % 2'

for "even" lines 2,4,6,8 ls |awk '0 == (NR) % 2'

For more completeness..

ls -l | (for ((x=0;x<2;x++)) ; do read ; done ; head -n1)

Throw away lines until you get to the second, then print out the first line after that. So, it prints the 3rd line.

If it's just the second line..

ls -l | (read; head -n1)

Put as many 'read's as necessary.
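
Or wrap the idea in a small function so the number of reads becomes a parameter (a hypothetical helper, assuming bash):

skipto() {
  # hypothetical: skip $1 - 1 lines of stdin, then print the next one
  for ((i = 1; i < $1; i++)); do read -r || return; done
  head -n 1
}

ls -l | skipto 3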

I added this to my .bash_profile:

function gdf(){
  # find the path of the deleted file matching $1 in the git log
  DELETE_FILE_PATH=$(git log --diff-filter=D --summary | grep delete | grep "$1" | awk '{print $4}')
  # grab the most recent commit that touched that path
  SHA=$(git log --all -- "$DELETE_FILE_PATH" | grep commit | head -n 1 | awk '{print $2}')
  # show the file as it was in that commit
  git show "$SHA" -- "$DELETE_FILE_PATH"
}

And it works:

gdf <file name>
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow