Problem

I wrote a Python script that processes UTF-8 encoded CSV files containing non-ASCII characters. However, the encoding of the output is broken. So, from this in the input:

"d\xc4\x9bjin hornictv\xc3\xad"

I get this in the output:

"d\xe2\x99\xafjin hornictv\xc2\xa9\xc6\xaf"

Can you suggest where the encoding error might come from? Have you seen similar behaviour before?
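This kind of corruption typically appears when UTF-8 bytes are decoded with the wrong codec somewhere in the pipeline and then re-encoded as UTF-8. A minimal sketch of the mechanism (in Python 3 syntax; it illustrates the pattern rather than the exact bytes above):

```python
# UTF-8 encoding of the character "ě" (U+011B), as in "dějin"
utf8_bytes = "\u011b".encode("utf-8")            # b'\xc4\x9b'

# Mis-decode those bytes as Latin-1, then re-encode as UTF-8:
# each original byte is treated as its own character, so the
# string grows and turns into mojibake.
mojibake = utf8_bytes.decode("latin-1").encode("utf-8")
print(mojibake)  # b'\xc3\x84\xc2\x9b'
```

The garbled output in the question has the same signature: multi-byte UTF-8 sequences that no longer correspond to the original characters.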

EDIT: I'm using the csv standard library with the UnicodeWriter class featured in the docs. I'm on Python 2.6.6.

EDIT 2: The code to reproduce the behaviour:

#!/usr/bin/env python
#-*- coding:utf-8 -*-

import csv
from pymarc import MARCReader # The pymarc package, available on PyPI: http://pypi.python.org/pypi/pymarc/2.71
from UnicodeWriter import UnicodeWriter # The UnicodeWriter from: http://docs.python.org/library/csv.html

def getRow(tag, record):
  if record[tag].is_control_field():
    row = [tag, record[tag].value()]
  else:
    row = [tag] + record[tag].subfields
  return row

inputFile = open("input.mrc", "rb") # open in binary mode; MARC is a binary format
outputFile = open("output.csv", "wb")
reader = MARCReader(inputFile, to_unicode = True)
writer = UnicodeWriter(outputFile, delimiter = ",", quoting = csv.QUOTE_MINIMAL)

for record in reader:
  if bool(record["001"]):
    tags = [field.tag for field in record.get_fields()]
    tags.sort()
    for tag in tags:
      writer.writerow(getRow(tag, record))

inputFile.close()
outputFile.close()

The input data is available here (large file).


Solution

Adding the force_utf8 = True argument to the MARCReader constructor solved the problem:

reader = MARCReader(inputFile, to_unicode = True, force_utf8 = True)

Inspecting the source code (via the inspect module) shows that it does something like:

string.decode("utf-8", "strict")
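That call takes the raw bytes of each MARC field and decodes them into a unicode string before they reach the writer. A minimal sketch of the same round trip, using the example bytes from the question (Python 3 syntax):

```python
# The raw UTF-8 bytes from the question
raw = b"d\xc4\x9bjin hornictv\xc3\xad"

# What force_utf8 effectively does: decode the bytes strictly as UTF-8,
# raising UnicodeDecodeError instead of silently producing garbage.
text = raw.decode("utf-8", "strict")
print(text)  # dějin hornictví
```

With "strict" error handling, malformed input fails loudly instead of being re-encoded into mojibake downstream.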

Other tips

You can try opening the file with an explicit UTF-8 encoding:

import codecs
f = codecs.open('myfile.txt', encoding='utf8')
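A slightly fuller sketch of this approach, writing and then reading back a UTF-8 file (the filename is hypothetical; codecs.open returns decoded unicode strings on both Python 2 and 3):

```python
import codecs

# Write a small UTF-8 sample file (hypothetical name)
with codecs.open('sample.txt', 'w', encoding='utf8') as f:
    f.write(u"d\u011bjin hornictv\u00ed")

# Read it back: f.read() returns a unicode string, already decoded
with codecs.open('sample.txt', encoding='utf8') as f:
    text = f.read()

print(text)  # dějin hornictví
```

Opening both the input and the output with a declared encoding keeps the decode/encode steps explicit, which makes it much easier to spot where a wrong codec sneaks in.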
License: CC-BY-SA with attribution
Not affiliated with StackOverflow