Question

I have a text file (more precisely, a “German-style” CSV file, i.e. semicolon-separated with decimal commas) which has a date and a measured value on each line.
There are stretches of faulty values which I want to remove before further work. I'd like to store these cuts in some script so that my corrections are documented and I can replay those corrections if necessary.

The lines look like this:

28.01.2005 14:48:38;5,166
28.01.2005 14:50:38;2,916
28.01.2005 14:52:38;0,000
28.01.2005 14:54:38;0,000
(long stretch of values that should be removed; could also be something else beside 0)
01.02.2005 00:11:43;0,000
01.02.2005 00:13:43;1,333
01.02.2005 00:15:43;3,250

Now I'd like to store a list of begin and end patterns like 28.01.2005 14:52:38 + 01.02.2005 00:11:43, and the script would cut the lines matching these begin/end pairs and everything that's between them.

I'm thinking about hacking an awk script, but perhaps I'm missing an already existing tool.


Solution

Have a look at sed:

sed '/start_pat/,/end_pat/d'

will delete lines between start_pat and end_pat (inclusive).
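A quick sketch on the sample from the question (the file name data.csv is an assumption; the dots in the timestamps are escaped so they match literally):

```shell
# The sample data from the question (the faulty stretch abbreviated).
cat > data.csv <<'EOF'
28.01.2005 14:48:38;5,166
28.01.2005 14:50:38;2,916
28.01.2005 14:52:38;0,000
28.01.2005 14:54:38;0,000
01.02.2005 00:11:43;0,000
01.02.2005 00:13:43;1,333
01.02.2005 00:15:43;3,250
EOF

# Delete the begin line, the end line, and everything between them.
sed '/^28\.01\.2005 14:52:38/,/^01\.02\.2005 00:11:43/d' data.csv
```

Only the four good lines survive. Once you trust the patterns, GNU sed's -i option will edit the file in place.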

To delete multiple such pairs, you can combine them with multiple -e options:

sed -e '/s1/,/e1/d' -e '/s2/,/e2/d' -e '/s3/,/e3/d' ...
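Since the stated goal is to document the corrections and replay them later, the pairs can also live in a sed script file applied with -f. A sketch (the file names cuts.sed, data.csv and data.clean.csv are assumptions):

```shell
# A few lines of the question's sample, so the sketch is self-contained.
cat > data.csv <<'EOF'
28.01.2005 14:48:38;5,166
28.01.2005 14:50:38;2,916
28.01.2005 14:52:38;0,000
01.02.2005 00:11:43;0,000
01.02.2005 00:13:43;1,333
EOF

# cuts.sed -- one range delete per faulty stretch; a comment documents each cut.
cat > cuts.sed <<'EOF'
# 28.01.-01.02.2005: sensor stuck at 0,000
/^28\.01\.2005 14:52:38/,/^01\.02\.2005 00:11:43/d
EOF

# Replay all documented cuts whenever needed.
sed -f cuts.sed data.csv > data.clean.csv
```

New faulty stretches become new lines in cuts.sed, and the file doubles as the record of what was removed and why.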

OTHER TIPS

Firstly, why do you need to keep a record of what you have done? Why not keep a backup of the original file, or take a diff between the old & new files, or put it under source control?

For the actual changes I suggest using Vim.

The Vim :global command (abbreviated to :g) can be used to run :ex commands on lines that match a regex. This is in many ways more powerful than awk since the commands can then refer to ranges relative to the matching line, plus you have the full text processing power of Vim at your disposal.

For example, this will do something close to what you want (untested, so caveat emptor):

:g!/^\d\d\.\d\d\.\d\d\d\d/-1write >> tmp.txt | delete

This matches lines that do NOT start with a date (the ! negates the match), appends the previous line to the file tmp.txt, then deletes the current line.

You will probably end up with duplicate lines in tmp.txt, but they can be removed by running the file through uniq.

You can also use awk. Note that a bare range pattern prints the matched lines, so to delete the stretch you skip it instead:

awk '/start/,/end/ {next} {print}' file
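With the question's sample in a file (data.csv is an assumption), this looks like:

```shell
# A few lines of the question's sample.
cat > data.csv <<'EOF'
28.01.2005 14:50:38;2,916
28.01.2005 14:52:38;0,000
01.02.2005 00:11:43;0,000
01.02.2005 00:15:43;3,250
EOF

# Skip every line from the begin to the end timestamp, print the rest.
awk '/^28\.01\.2005 14:52:38/,/^01\.02\.2005 00:11:43/ {next} {print}' data.csv
```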

I would seriously suggest learning the basics of perl (i.e. not the OO stuff). It will repay you in bucket-loads.

It is fast and simple to write a bit of perl for this (and many other such tasks) once you have grasped the fundamentals, which are straightforward if you are already used to awk, sed, grep and the like.

You won't have to remember how to use lots of different tools. Where you would previously have piped several tools together to solve a problem, you can use a single perl script (usually much faster to execute, too).

And, perl is installed on virtually every unix/linux distro now.

(that sed is neat though :-)
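For this particular task, a perl one-liner using the range (flip-flop) operator does the same job as the sed version. A sketch on the question's sample (data.csv is an assumed file name):

```shell
# A few lines of the question's sample.
cat > data.csv <<'EOF'
28.01.2005 14:48:38;5,166
28.01.2005 14:52:38;0,000
01.02.2005 00:11:43;0,000
01.02.2005 00:15:43;3,250
EOF

# The range operator is true from the begin match through the end match,
# inclusive -- print everything outside it.
perl -ne 'print unless /^28\.01\.2005 14:52:38/ .. /^01\.02\.2005 00:11:43/' data.csv
```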

use grep -v (print non-matching lines)

Sorry - I thought you just wanted the lines without 0,000 at the end.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow