Have you considered using the multiprocessing module to parallelize processing the files? Assuming you're actually CPU-bound here (meaning it's the Fourier transform that's eating up most of the running time, not reading/writing the files), that should cut the total execution time without needing to speed up the loop body itself.
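(To check whether the transform really dominates, it's worth timing the pieces separately before parallelizing. A minimal sketch with `time.perf_counter`, using a synthetic signal as a stand-in for one of your data files:)

```python
import time
import numpy as np

def timed(label, fn, *args):
    # Time one call, report it, and return the function's result.
    start = time.perf_counter()
    result = fn(*args)
    print('%s: %.4f s' % (label, time.perf_counter() - start))
    return result

# Synthetic signal standing in for one of your .dat files;
# in practice you would time loadtxt() and fft() on a real file.
t = np.linspace(0, 1, 2 ** 20)
f = np.sin(2 * np.pi * 50 * t)

fou = timed('fft', np.fft.fft, f)
```

If the read dominates instead, multiprocessing will buy you much less, since the processes end up contending for the same disk.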
Edit:
For example, something like this (untested, but should give you the idea):
import multiprocessing

import matplotlib
matplotlib.use('Agg')  # non-interactive backend; needed when plotting from worker processes
from pylab import *

def do_transformation(filename):
    t, f = loadtxt(filename, unpack=True)
    dt = t[1] - t[0]
    fou = absolute(fft.fft(f))
    frq = absolute(fft.fftfreq(len(t), dt))
    ymax = median(fou) * 30
    figure(figsize=(15, 7))
    plot(frq, fou, 'k')
    xlim(0, 400)
    ylim(0, ymax)
    iname = filename.replace('.dat', '.png')
    savefig(iname, dpi=80)
    close()
if __name__ == '__main__':  # guard required on platforms that spawn rather than fork
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    for filename in filelist:
        pool.apply_async(do_transformation, (filename,))
    pool.close()
    pool.join()
You may need to tweak what work actually gets done in the worker processes. Trying to parallelize the disk I/O portions may not help you much (or even hurt you), for example.
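One other caveat if you stick with apply_async: an exception raised inside a worker is swallowed by the pool and only re-raised when you call .get() on the returned AsyncResult, so it's worth keeping those handles around. A small sketch (the square function here is just a hypothetical stand-in for do_transformation):

```python
import multiprocessing

def square(x):
    # Stand-in for the real worker; any exception it raises is captured
    # by the pool and re-raised only when .get() is called below.
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    results = [pool.apply_async(square, (n,)) for n in range(5)]
    pool.close()
    pool.join()
    values = [r.get() for r in results]  # re-raises any worker exception here
    print(values)  # [0, 1, 4, 9, 16]
```

Without the .get() calls, a worker that crashes on one file would fail silently and you'd just be missing that output image.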