Question

I have a ~23000 line SQL dump containing several databases worth of data. I need to extract a certain section of this file (i.e. the data for a single database) and place it in a new file. I know both the start and end line numbers of the data that I want.

Does anyone know a Unix command (or series of commands) to extract all lines from a file between say line 16224 and 16482 and then redirect them into a new file?

Solution

sed -n '16224,16482p;16483q' filename > newfile

From the sed manual:

p - Print out the pattern space (to the standard output). This command is usually only used in conjunction with the -n command-line option.

-n - By default, sed prints out the pattern space at the end of each cycle through the script. The -n option disables this automatic printing, and sed only produces output when explicitly told to via the p command.

q - Exit sed without processing any more commands or input. Note that the current pattern space is printed if auto-print is not disabled with the -n option.

and

Addresses in a sed script can be in any of the following forms:

number Specifying a line number will match only that line in the input.

An address range can be specified by specifying two addresses separated by a comma (,). An address range matches lines starting from where the first address matches, and continues until the second address matches (inclusively).
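
To see the addressing in action on a small input, here is a minimal sketch (using seq to stand in for the dump, with the range scaled down):

seq 1 30 | sed -n '10,15p;16q'
# prints lines 10 through 15, then quits at line 16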

OTHER TIPS

sed -n '16224,16482 p' orig-data-file > new-file

Where 16224,16482 are the start line number and end line number, inclusive. This is 1-indexed. -n suppresses echoing the input as output, which you clearly don't want; the numbers indicate the range of lines to make the following command operate on; the command p prints out the relevant lines.

Quite simple using head/tail:

head -16482 in.sql | tail -259 > out.sql

using sed:

sed -n '16224,16482p' in.sql > out.sql

using awk:

awk 'NR>=16224 && NR<=16482' in.sql > out.sql

You could use 'vi' and then the following command:

:16224,16482w!/tmp/some-file

Alternatively:

cat file | head -n 16482 | tail -n 259

EDIT: Just to add an explanation: head -n 16482 displays the first 16482 lines, then tail -n 259 takes the last 259 lines of that output (16482 - 16224 + 1 = 259, the inclusive line count).
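
If you would rather let the shell do the arithmetic, a sketch along these lines (with assumed start/end variables) avoids computing 259 by hand:

start=16224; end=16482
head -n "$end" in.sql | tail -n "$((end - start + 1))" > out.sql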

There is another approach with awk:

awk 'NR==16224, NR==16482' file

If the file is huge, it can be good to exit after reading the last desired line. This way, it won't read the following lines unnecessarily:

awk 'NR==16224, NR==16482-1; NR==16482 {print; exit}' file
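
The same early-exit idea can be parameterized; this is a sketch with the start and end passed in as awk variables:

awk -v start=16224 -v end=16482 'NR >= start && NR <= end; NR == end {exit}' file
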
perl -ne 'print if 16224..16482' file.txt > new_file.txt

# print a section of a file based on line numbers
sed -n '16224,16482p' file.txt    # method 1
sed '16224,16482!d' file.txt      # method 2

sed -n '16224,16482p' < dump.sql

cat dump.txt | head -16482 | tail -259

should do the trick. The downside of this approach is that you need to do the arithmetic to determine the argument for tail (here 16482 - 16224 + 1 = 259) and to account for whether you want the 'between' to include the ending line or not.

Quick and dirty:

head -16482 < file.in | tail -259 > file.out

Probably not the best way to do it but it should work.

BTW: 259 = 16482-16224+1.

I wrote a Haskell program called splitter that does exactly this: have a read through my release blog post.

You can use the program as follows:

$ cat somefile | splitter 16224-16482

And that is all that there is to it. You will need Haskell to install it. Just:

$ cabal install splitter

And you are done. I hope that you find this program useful.

We can also check this at the command line, where n1 and n2 are the start and end line numbers:

cat filename | sed 'n1,n2!d' > abc.txt

For Example:

cat foo.pl|sed '100,200!d' > abc.txt

Using ruby:

ruby -ne 'puts "#{$.}: #{$_}" if $. >= 32613500 && $. <= 32614500' < GND.rdf > GND.extract.rdf
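
Note that this prefixes each line with its line number; if you just want the raw lines, dropping the prefix should work:

ruby -ne 'print if $. >= 32613500 && $. <= 32614500' < GND.rdf > GND.extract.rdf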

Standing on the shoulders of boxxar, I like this:

sed -n '<first line>,$p;<last line>q' input

e.g.

sed -n '16224,$p;16482q' input

The $ means "last line", so the first command makes sed print all lines starting with line 16224, and the second command makes sed quit after printing line 16482. (Adding 1 for the q-range in boxxar's solution does not seem to be necessary.)

I like this variant because I don't need to specify the ending line number twice. And I measured that using $ does not have detrimental effects on performance.
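
If you want to check the performance claim on your own data, a quick sketch is to time both variants with the output discarded:

time sed -n '16224,16482p;16483q' input > /dev/null
time sed -n '16224,$p;16482q' input > /dev/null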

I was about to post the head/tail trick, but actually I'd probably just fire up emacs. ;-)

  1. esc-x goto-line ret 16224
  2. mark (ctrl-space)
  3. esc-x goto-line ret 16482
  4. esc-w

Then open the new output file, paste with ctrl-y, and save.

This lets me see what's happening.

I would use:

awk 'FNR >= 16224 && FNR <= 16482' my_file > extracted.txt

FNR contains the record (line) number of the line being read from the file.
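
Because FNR resets to 1 for each input file (unlike NR, which keeps counting across files), this form also works when several files are given; a sketch with hypothetical file names:

awk 'FNR >= 16224 && FNR <= 16482' dump1.sql dump2.sql > extracted.txt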

I wrote a small bash script that you can run from your command line, so long as you update your PATH to include its directory (or you can place it in a directory that is already contained in the PATH).

Usage: $ pinch filename start-line end-line

#!/bin/bash
# Display line number ranges of a file to the terminal.
# Usage: $ pinch filename start-line end-line
# By Evan J. Coon

FILENAME=$1
START=$2
END=$3

ERROR="[PINCH ERROR]"

# Check that the number of arguments is 3
if [ $# -lt 3 ]; then
    echo "$ERROR Need three arguments: Filename Start-line End-line"
    exit 1
fi

# Check that the file exists.
if [ ! -f "$FILENAME" ]; then
    echo -e "$ERROR File does not exist. \n\t$FILENAME"
    exit 1
fi

# Check that start-line is not greater than end-line
if [ "$START" -gt "$END" ]; then
    echo -e "$ERROR Start line is greater than End line."
    exit 1
fi

# Check that start-line is positive.
if [ "$START" -lt 0 ]; then
    echo -e "$ERROR Start line is less than 0."
    exit 1
fi

# Check that end-line is positive.
if [ "$END" -lt 0 ]; then
    echo -e "$ERROR End line is less than 0."
    exit 1
fi

NUMOFLINES=$(wc -l < "$FILENAME")

# Check that end-line is not greater than the number of lines in the file.
if [ "$END" -gt "$NUMOFLINES" ]; then
    echo -e "$ERROR End line is greater than number of lines in file."
    exit 1
fi

# The distance from the end of the file to end-line
ENDDIFF=$(( NUMOFLINES - END ))

# For larger files, this will run more quickly. If the distance from the
# end of the file to the end-line is less than the distance from the
# start of the file to the start-line, then start pinching from the
# bottom as opposed to the top.
if [ "$START" -lt "$ENDDIFF" ]; then
    < "$FILENAME" head -n $END | tail -n +$START
else
    < "$FILENAME" tail -n +$START | head -n $(( END-START+1 ))
fi

# Success
exit 0
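
For example, assuming the script is saved as pinch somewhere on your PATH, extracting the question's range into a new file would look like:

$ pinch dump.sql 16224 16482 > new-file.sql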

This might work for you (GNU sed):

sed -ne '16224,16482w newfile' -e '16482q' file

or taking advantage of bash:

sed -n $'16224,16482w newfile\n16482q' file
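
The w command also makes it possible to split several ranges into several files in one pass; a sketch with assumed ranges and file names:

sed -n -e '16224,16482w newfile1' -e '17000,17100w newfile2' -e '17100q' file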

I wanted to do the same thing from a script using a variable and achieved it by putting quotes around the $variable to separate the variable name from the p:

sed -n "$first","$count"p imagelist.txt >"$imageblock"

I wanted to split a list into separate folders and found the initial question and answer a useful step. (The split command was not an option on the old OS I have to port code to.)
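
A sketch of that kind of loop, splitting imagelist.txt into fixed-size blocks (the block size and output file names are assumptions):

first=1
count=100
block=1
total=$(wc -l < imagelist.txt)
while [ "$first" -le "$total" ]; do
    last=$(( first + count - 1 ))
    sed -n "$first","$last"p imagelist.txt > "imageblock_$block.txt"
    first=$(( last + 1 ))
    block=$(( block + 1 ))
done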

The -n in the accepted answers works. Here's another way, in case you're so inclined.

cat "$filename" | sed "${linenum}p;d"

This does the following:

  1. pipe in the contents of a file (or feed in the text however you want).
  2. sed selects the given line, prints it
  3. d is required to delete lines that are not selected; otherwise sed would auto-print every line, so the selected line would appear twice (once from auto-print and once from the ${linenum}p part). The -n option is basically doing the same thing as the d here. A small demo follows.
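
A minimal demo on a small input (both commands should print just 3):

seq 1 5 | sed "3p;d"
seq 1 5 | sed -n "3p"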

Since we are talking about extracting lines of text from a text file, I will give a special case where you want to extract all lines that match a certain pattern.

myfile content:
=====================
line1 not needed
line2 also discarded
[Data]
first data line
second data line
=====================

sed -n '/Data/,$p' myfile

This will print the [Data] line and everything after it. If you want the text from line 1 up to the pattern, type: sed -n '1,/Data/p' myfile. Furthermore, if you know two patterns (they had better be unique in your text), both the beginning and end lines of the range can be specified with matches.

sed -n '/BEGIN_MARK/,/END_MARK/p' myfile
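
The two address styles can also be mixed, e.g. a numeric start with a pattern end; a sketch using the question's start line (the end pattern here is an assumption):

sed -n '16224,/^UNLOCK TABLES/p' dump.sql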

I think this might be a useful solution. If the table name is "person", you can use sed to get all the lines you need to restore your table.

sed -n -e '/DROP TABLE IF EXISTS.*`person`/,/UNLOCK TABLES/p' data.sql > new_data.sql

This is based on another answer that was missing the "DROP TABLE IF EXISTS" statement for the table being restored, and where you had to delete a few lines from the bottom of the new file before using it to prevent the next table from being deleted.
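
If you first need to discover the line numbers to feed into the other answers, grep -n can locate the table boundaries in the dump; a sketch:

grep -n -e 'DROP TABLE IF EXISTS' -e 'UNLOCK TABLES' data.sql
# prints each matching line prefixed with its line number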

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow