If you make your image warping filter a subclass of GPUImageTwoInputFilter, this synchronization is taken care of for you.
Add the image warping filter and your feature matching / transformation estimation filter as targets of the GPUImageVideoCamera instance, then add the image warping filter as a target of your feature matching / transformation estimation filter. Your video input will then arrive on the warping filter's first input, and the results of your feature matching and transformation estimation on its second (the order in which targets are added to a filter determines which input slot they feed). A GPUImageTwoInputFilter subclass only processes and outputs a frame once frames have been provided to both of its inputs.
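As a rough sketch of that wiring, assuming GPUImage v1 in Objective-C — `FeatureMatchFilter` and `ImageWarpFilter` are hypothetical names standing in for your own filters, with the warping filter subclassing GPUImageTwoInputFilter:

```objc
#import <GPUImage/GPUImage.h>

GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
FeatureMatchFilter *matchFilter = [[FeatureMatchFilter alloc] init];
ImageWarpFilter *warpFilter = [[ImageWarpFilter alloc] init]; // GPUImageTwoInputFilter subclass

// The first target added to the warp filter occupies its first input slot:
[camera addTarget:warpFilter];       // camera frames -> first input
[camera addTarget:matchFilter];
[matchFilter addTarget:warpFilter];  // estimated transform -> second input

[camera startCameraCapture];
```

Because the warp filter waits for both inputs before rendering, each warped frame is guaranteed to use the transform estimated from that same camera frame.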
This should give you the synchronization you want, and be pretty straightforward to set up.