I think you will find it more difficult than you might expect (though not impossible) to filter images piecemeal by splitting them, although it is not an uncommon approach, because you must take care of the boundaries between the tiles that make up the split image.
As (I think) you stated in your question, most spatial filters (i.e. those based on convolution with some kernel) adopt a heuristic at the edges of the image, where the kernel would overlap "outside" the image border: the unknown pixel values may be ignored, assumed zero, reflected back from the image, and so on. In your case these same effects will occur at the seams between tiles, where the "missing" pixels do in fact exist. This can be overcome by making the tiles overlap, so that each tile includes enough pixels from its neighbours to apply the filter without resorting to a boundary heuristic. The same applies in the Fourier domain, where you also need to make sure that your tiles are large enough to contain all the frequencies you wish to retain.
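To make the overlap idea concrete, here is a minimal sketch (the split position, kernel size, and use of SciPy's `uniform_filter` are my own choices for illustration): each tile borrows a margin of real pixels from its neighbour, equal to half the kernel width, so the filter never has to guess values at the seam.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# A 3x3 mean filter applied to two vertical tiles of a test image.
# Each tile is padded with `pad` columns borrowed from its neighbour,
# so the filter sees real data at the seam instead of a boundary heuristic.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

pad = 1  # half the kernel width (3 // 2)
full = uniform_filter(img, size=3, mode="reflect")

# Split down the middle, overlapping by `pad` columns across the seam,
# then discard the overlap region from each filtered tile.
left = uniform_filter(img[:, : 32 + pad], size=3, mode="reflect")[:, :32]
right = uniform_filter(img[:, 32 - pad :], size=3, mode="reflect")[:, pad:]
stitched = np.hstack([left, right])

print(np.allclose(stitched, full))  # the seam is invisible
```

At the true image borders both the tiles and the full-image filter fall back to the same reflection heuristic, so the stitched result matches the whole-image result exactly.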
A second point: you say that, for blurring, "Mathematically, I think we would lose some of the lower frequencies", but actually you retain the low frequencies and lose the high frequencies; in the Fourier domain, blurring is a low-pass filter.
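You can see this directly by comparing Fourier magnitudes before and after a blur. A quick sketch (the Gaussian sigma and image size are arbitrary; `mode="wrap"` makes the blur a circular convolution so the comparison is clean):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Blur an image and compare its Fourier magnitudes before and after.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
blurred = gaussian_filter(img, sigma=3, mode="wrap")

F = np.abs(np.fft.fft2(img))
G = np.abs(np.fft.fft2(blurred))

# The DC term (lowest frequency) passes through almost unchanged,
# while the highest frequency (Nyquist) is heavily attenuated.
print(G[0, 0] / F[0, 0])      # close to 1: low frequencies retained
print(G[64, 64] / F[64, 64])  # close to 0: high frequencies lost
```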
If you are splitting into small enough tiles, it will probably be faster for most filtering operations to stay in the spatial domain; the cost of the FFTs will wipe out any saving you might get from multithreading. Of course, for some Fourier-domain filtering, such as deconvolution, you must use the whole image.
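For what it's worth, SciPy will make this spatial-vs-FFT decision for you: `scipy.signal.choose_conv_method` picks whichever route its cost model expects to be faster, and for a small kernel the two give the same answer anyway. A rough sketch (image and kernel sizes are arbitrary; both calls use zero padding at the borders, so the results agree):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve, choose_conv_method

# The same small blur done in the spatial and in the Fourier domain.
# For a small kernel the direct (spatial) method is typically cheaper;
# the FFT route only pays off once the kernel is large.
rng = np.random.default_rng(2)
img = rng.random((256, 256))
kernel = np.ones((3, 3)) / 9.0  # simple 3x3 box blur

direct = convolve2d(img, kernel, mode="same")   # zero-padded borders
via_fft = fftconvolve(img, kernel, mode="same")  # also zero-padded

print(np.allclose(direct, via_fft))  # identical up to rounding
print(choose_conv_method(img, kernel, mode="same"))  # 'direct' or 'fft'
```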