Question

I'm new to Python, coming from the R world, and I'm working on big text files structured in data columns (this is LiDAR data, so generally 60 million+ records).

Is it possible to change the field separator (e.g. from tab-delimited to comma-delimited) of such a big file without having to read the file and run a for loop over the lines?

Solution

No.

  • Read the file in
  • Change separators for each line
  • Write each line back

This is easily doable with just a few lines of Python (not tested but the general approach works):

# Python - it's so readable, the code basically just writes itself ;-)
#
with open('infile') as infile:
  with open('outfile', 'w') as outfile:
    for line in infile:
      fields = line.split('\t')        # split on the old separator
      outfile.write(','.join(fields))  # rejoin with the new one

I'm not familiar with R, but if it has a library function for this it's probably doing exactly the same thing.

Note that this code only reads one line at a time from the file, so the file can be larger than the physical RAM - it is never wholly loaded into memory.
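
If some of your fields might themselves contain commas or quote characters, the standard csv module handles the quoting for you and still streams row by row. A minimal sketch (the filenames are placeholders):

import csv

with open('infile', newline='') as src, open('outfile', 'w', newline='') as dst:
    reader = csv.reader(src, delimiter='\t')  # parse tab-separated rows
    writer = csv.writer(dst)                  # default dialect writes comma-separated rows
    writer.writerows(reader)                  # streams row by row, never loads the whole file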

OTHER TIPS

You can use the Linux tr command to replace any character with any other character.
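
For example, to turn tabs into commas in a single pass (the filenames are placeholders):

tr '\t' ',' < input.tsv > output.csv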

Actually, let's say yes, you can do it without an explicit loop, e.g.:

with open('in') as infile:
  with open('out', 'w') as outfile:
    # split on tabs (not newlines) and let writelines consume the
    # generator lazily - a bare map() would never run in Python 3,
    # because map is lazy there
    outfile.writelines(','.join(line.split('\t')) for line in infile)

You can't, but I strongly advise you to check out generators.

The point is that you can write a faster, better-structured program without needing to store the data in memory in order to process it.

For instance

file = open("bigfile","w")
j = (i.split("\t") for i in file)
s = (","join(i) for i in j)
#and now magic happens
for i in s:
     some_other_file.write(i)

This code only ever uses enough memory to hold a single line.
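
If you want to go further, the same pipeline can be wrapped in a generator function with yield, which keeps the one-line-at-a-time behaviour but gives the conversion a reusable name (a sketch; convert_separators and the filenames are made up):

def convert_separators(lines, old='\t', new=','):
    # yield each line with the separator swapped; nothing is buffered
    for line in lines:
        yield new.join(line.split(old))

with open('bigfile') as infile, open('outfile', 'w') as outfile:
    outfile.writelines(convert_separators(infile))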

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow