Question

I am having some trouble reading general files into a program I have made. The problem I am currently having is that PDFs appear to be based on some variant of UTF-8, including a BOM, which throws a wrench into my entire operation. Within my application I use the Snowball stemming algorithm, which requires ASCII input. There are a number of topics about resolving such errors to UTF-8, but none of them involve feeding the result into the Snowball algorithm, or consider that ASCII is what I want the end result to be. Currently the file I am using is a Notepad file saved with the standard ANSI encoding. The specific error message I get is this:

File "C:\Users\svictoroff\Desktop\Alleyoop\Python_Scripts\Keywords.py", line 38, in Map_Sentence_To_Keywords
    Word = Word.encode('ascii', 'ignore')
UnicodeDecodeError: 'ascii' codec can't decode byte 0x96 in position 0: ordinal not in range(128)

My understanding was that within Python, including the ignore argument would simply skip over any non-ASCII characters encountered, and in this way I would bypass any BOM or special characters, but clearly this is not the case. The actual code called is here:

def Map_Sentence_To_Keywords(Sentence, Keywords):
    '''Takes in a sentence and a list of Keywords, returns a tuple where the
    first element is the sentence, and the second element is a set of
    all keywords appearing in the sentence. Uses Snowball algorithm'''
    Equivalence = stem.SnowballStemmer('english')
    Found = []
    Sentence = re.sub(r'^(\W*?)(.*)(\n?)$', r'\2', Sentence)
    Words = Sentence.split()
    for Word in Words:
        Word = Word.lower().strip()
        Word = Word.encode('ascii', 'ignore')
        Word = Equivalence.stem(Word)
        Found.append(Word)
    return (Sentence, Found)

By including the non-greedy removal of non-word characters at the front of the string, I also hoped that the troublesome characters would be stripped out, but again this is not the case. I have attempted a number of other encodings besides ASCII; a strict Base64 encoding works, but is highly non-ideal for my application. Any ideas on how to fix this in an automated way?

The initial decoding of Element is failing, but the Unicode error is only raised when the element is actually passed to the encoder.

for Element in Curriculum_Elements:
    try:
        Element = Element.decode('utf-8-sig')
    except:
        print Element
    Curriculum_Tuples.append(Map_Sentence_To_Keywords(Element, Keywords))

def scraping(File):
    '''Takes in a txt file of curriculum, removes all newlines and carriage returns
    that occur after a lowercase character, then splits at all remaining newlines'''
    Curriculum_Elements = []
    Document = open(File, 'rb').read()
    Document = re.sub(r'(?<=[a-zA-Z,])\r?\n', ' ', Document)
    Curriculum_Elements = Document.split('\r\n')
    return Curriculum_Elements

The code shown above generates the curriculum elements in question.

for Element in Curriculum_Elements:
    try:
        Element = unicode(Element, 'utf-8-sig', 'ignore')
    except:
        print Element

This typecasting hackaround actually works, but then the conversion back to ASCII is a bit wonky. It produces this warning:

Warning (from warnings module):
  File "C:\Python27\lib\encodings\utf_8_sig.py", line 19
    if input[:3] == codecs.BOM_UTF8:
UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal

Solution

Try decoding the UTF-8 input into a unicode string first, then encode that into ASCII (ignoring non-ASCII). It really doesn't make sense to encode a string that's already encoded.

input = file.read()   # Replace with your file input code...
input = input.decode('utf-8-sig')   # '-sig' handles BOM

# Now isinstance(input, unicode) is True

# ...
Sentence = Sentence.encode('ascii', 'ignore')
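
To make the original error concrete, here is a minimal Python 2 sketch using a made-up byte string (a UTF-8 BOM followed by text containing an en dash). Calling .encode() on a byte string makes Python 2 implicitly decode it with the default ASCII codec first, which is exactly the UnicodeDecodeError in the traceback above; decoding explicitly first avoids that:

raw = '\xef\xbb\xbfcourse \xe2\x80\x93 intro'   # hypothetical UTF-8 bytes: BOM + en dash

# raw.encode('ascii', 'ignore')    # would raise UnicodeDecodeError (implicit ASCII decode)

text = raw.decode('utf-8-sig')                  # u'course \u2013 intro', BOM stripped
print text.encode('ascii', 'ignore')            # prints 'course  intro' -- dash dropped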

After the edits, I see that you were already attempting to decode the strings before encoding them in ASCII. But, it seems the decoding was happening too late, after the file's contents had already been manipulated. This can cause problems since not every UTF-8 byte is a character (some characters take several bytes to encode). Imagine an encoding that transforms any string to a sequence of a's and b's. You wouldn't want to manipulate it before decoding it, because you'd see a's and b's everywhere even if there weren't any in the unencoded string -- the same problem arises with UTF-8, albeit much more subtly, because most bytes really are characters.
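
As a quick hypothetical illustration of that pitfall, here is a two-byte UTF-8 character being cut in half by byte-level slicing (Python 2, not your data):

raw = u'caf\xe9'.encode('utf-8')          # 'caf\xc3\xa9' -- the e-acute takes two bytes
print len(raw)                            # 5, not 4

chopped = raw[:4]                         # byte-level slice lands inside the character
print chopped.decode('utf-8', 'ignore')   # u'caf' -- the accented letter is gone

print repr(u'caf\xe9'[:4])                # u'caf\xe9' -- slicing after decoding is safe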

So, decode once, before you do anything else:

def scraping(File):
    '''Takes in a txt file of curriculum, removes all newlines and carriage returns
    that occur after a lowercase character, then splits at all remaining newlines'''
    Curriculum_Elements = []
    Document = open(File, 'rb').read().decode('utf-8-sig')
    Document = re.sub(r'(?<=[a-zA-Z,])\r?\n', ' ', Document)
    Curriculum_Elements = Document.split('\r\n')
    return Curriculum_Elements

# ...

for Element in Curriculum_Elements:
    Curriculum_Tuples.append(Map_Sentence_To_Keywords(Element, Keywords))

Your original Map_Sentence_To_Keywords function should work without modification, though I would suggest encoding to ASCII before splitting, just to improve efficiency/readability.
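
For example, here is a minimal sketch of that variant -- untested, and assuming (as in your snippet) that re is imported, stem comes from NLTK, and Sentence is already a decoded unicode object:

def Map_Sentence_To_Keywords(Sentence, Keywords):
    '''Same behaviour as before, but encodes the whole sentence to ASCII once
    instead of once per word.'''
    Equivalence = stem.SnowballStemmer('english')
    Sentence = re.sub(r'^(\W*?)(.*)(\n?)$', r'\2', Sentence)
    Ascii_Sentence = Sentence.encode('ascii', 'ignore')
    Found = [Equivalence.stem(Word.lower().strip())
             for Word in Ascii_Sentence.split()]
    return (Sentence, Found)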

Licensed under: CC-BY-SA with attribution