It is a dirty little secret, but very few applications can truly change a file "in place". Most of the time it only looks like the application is modifying the file; under the hood the edited file is written to a temporary location and then moved over the original.
If you think about it, once you insert a few bytes in the middle of a file, you have to rewrite everything from that point on anyway.
Since the ASCII output tends to be smaller than the Unicode input, you can probably pull off something like this (Unix only, I guess):
#!/usr/bin/env python
import os

from unidecode import unidecode

def unidecode_file(filename):
    # open for writing without truncating
    fd = os.open(filename, os.O_WRONLY)
    pos = 0  # keep track of how many bytes we have written
    # open the same file for reading
    with open(filename, encoding='utf-8') as input:
        for line in input:
            ascii_line = unidecode(line).encode('ascii')
            pos += len(ascii_line)
            os.write(fd, ascii_line)
    os.ftruncate(fd, pos)  # cut off the leftover bytes
    os.close(fd)  # that is all, folks

if __name__ == '__main__':
    unidecode_file('somefile.txt')
This stunt is not safe and is not the canonical way to edit a file (you will surely run into trouble if the output is bigger than the input). Use the tempfile approach suggested by Drew, but make sure the temporary filename is unique (the safest way is to let the tempfile module generate a random name for you).
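For reference, here is a minimal sketch of that safer tempfile approach. The helper name rewrite_file and the pluggable transform argument are my own inventions for illustration; in the real script you would pass unidecode (wrapped to encode as ASCII) as the transform:

```python
import os
import tempfile

def rewrite_file(filename, transform):
    """Rewrite filename line by line via a uniquely named temp file."""
    # Keep the temp file in the same directory, so the final os.replace
    # is a same-filesystem rename (atomic on POSIX).
    dirname = os.path.dirname(os.path.abspath(filename))
    fd, tmpname = tempfile.mkstemp(dir=dirname)  # unique, securely named
    try:
        with os.fdopen(fd, 'w', encoding='utf-8') as output, \
             open(filename, encoding='utf-8') as input:
            for line in input:
                output.write(transform(line))
        os.replace(tmpname, filename)  # swap the result in, in one step
    except BaseException:
        os.unlink(tmpname)  # do not leave the temp file behind on failure
        raise
```

If the replace fails halfway (disk full, crash), the original file is untouched, which is exactly the guarantee the in-place trick above cannot give you.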