Question

I have a tab delimited text file with the following data:

    ahi1
    b/se
ahi 
test    -2.435953
        1.218364
    ahi2
    b/se
ahi 
test    -2.001858
        1.303935

I want to extract the two floating point numbers to a separate CSV file with two columns, i.e.

-2.435953 1.218364

-2.001858 1.303935

Currently my hack attempt is:

 import csv
 from itertools import islice
 results = csv.reader(open('test', 'r'), delimiter="\t")

 list(islice(results, 3))
 print(next(results))
 print(next(results))
 list(islice(results, 3))
 print(next(results))
 print(next(results))

This is not ideal. I am a noob to Python, so I apologise in advance and thank you for your time.

Solution

Here is the code to do the job:

import re

# the same data, copy/pasted from your question
data = """    ahi1
    b/se
ahi 
test    -2.435953
        1.218364
    ahi2
    b/se
ahi 
test    -2.001858
        1.303935"""

# We search through the data line by line and parse out the numbers
# using a regular expression:
#   [^-\d]*     any number of characters that aren't digits or '-'
#               (^ inside [] means NOT)
#   -?\d+\.\d+  an optional minus sign, one or more digits,
#               a dot, and one or more digits (captured)
#   [^-\d]*     the same as the first part
pattern = re.compile(r"[^-\d]*(-?\d+\.\d+)[^-\d]*")

results = []
for line in data.split("\n"):
    match = pattern.match(line)
    if match:
        results.append(match.group(1))

# pair the numbers up: elements 0 and 1, then 2 and 3, and so on
pairs = list(zip(results[::2], results[1::2]))

for p in pairs:
    print("%s, %s" % (p[0], p[1]))

The output:

>>>
-2.435953, 1.218364
-2.001858, 1.303935

Instead of printing the numbers, you could save them in a list and zip them together afterwards. I'm using Python's regular expression module to parse the text. I can only recommend you pick up regular expressions if you don't already know them; I find them very useful for parsing text and all sorts of machine-generated output files.

EDIT:

Oh, and BTW, if you're worried about performance: I tested on my slow old 2 GHz IBM T60 laptop, and I can parse a megabyte in about 200 ms using the regex.

UPDATE: I felt kind, so I did the last step for you :P
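To take that last step all the way to a CSV file, as the question asks, here is a minimal sketch. It uses `re.search` with a slightly simplified pattern (just the number, no surrounding context), and `io.StringIO` stands in for a real output file opened with `open('out.csv', 'w', newline='')`:

```python
import csv
import io
import re

# the same sample data as in the question
data = """    ahi1
    b/se
ahi 
test    -2.435953
        1.218364
    ahi2
    b/se
ahi 
test    -2.001858
        1.303935"""

# an optional minus sign, digits, a dot, digits
pattern = re.compile(r"-?\d+\.\d+")

# collect every float-looking token, line by line
numbers = [m.group(0)
           for m in (pattern.search(line) for line in data.split("\n"))
           if m]

# pair consecutive numbers up and write each pair as one CSV row;
# io.StringIO stands in for a real output file here
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(zip(numbers[::2], numbers[1::2]))

print(buf.getvalue())
```

With the sample data this produces the two rows `-2.435953,1.218364` and `-2.001858,1.303935`.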

Other tips

Maybe this can help

zip(*[results]*5)

e.g.

import csv
results = csv.reader(open('test', 'r'), delimiter="\t")
for result1, result2 in (x[3:5] for x in zip(*[results]*5)):
    ... # do something with the results
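The `zip(*[iterator]*5)` idiom works because the same iterator object is passed to `zip` five times, so `zip` pulls five consecutive items per tuple, turning the flat stream of lines into 5-line records. A small self-contained demonstration, assuming tab-separated lines like those in the question:

```python
# the question's ten lines as a flat list, with tabs as separators
lines = ["    ahi1", "    b/se", "ahi ", "test\t-2.435953", "\t1.218364",
         "    ahi2", "    b/se", "ahi ", "test\t-2.001858", "\t1.303935"]

# passing the same iterator five times makes zip consume
# five consecutive items per tuple
it = iter(lines)
records = list(zip(*[it] * 5))

# in each 5-line record, lines 3 and 4 hold the numbers
numbers = [(r[3].split("\t")[1], r[4].split("\t")[1]) for r in records]
print(numbers)
```

This prints the two number pairs as tuples of strings.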

Tricky, but here is a more compact and sequential shell solution:

$ grep -v "ahi" myFileName | grep -v se | tr -d "test\" " | awk 'NR%2{printf $0", ";next;}1'
-2.435953, 1.218364
-2.001858, 1.303935

How it works: first remove the unwanted text lines, then delete the unwanted characters within the remaining lines, then join every second line with formatting. I just added the comma for readability; leave the comma out of awk's printf ", " if you don't need it.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow