This isn't possible inside a single Data Flow. There are various "solutions" floating around if you search long enough, but they overlook an architectural reality: rows travel down a Data Flow in buffers (batches), and those buffers are processed in parallel.
So imagine "new" rows for the same key arriving in two adjacent buffers. There is no way to guarantee that your downstream handling of the "new" rows from buffer 1 has completed before buffer 2 hits your upstream Lookup. The Lookup's cache still won't contain the key, so buffer 2's row is also treated as "new", and you end up with multiple rows inserted into your lookup target table for the same key.
You need a separate, upstream Data Flow Task (or Execute SQL Task) that performs all the required lookup inserts before the main Data Flow runs. This is also the more efficient solution at runtime: the lookup inserts can use Fast Load with Table Lock, and the downstream Lookup in the main Data Flow can run in Full Cache mode.
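As a rough sketch of that upstream step, a single set-based statement can add every missing key before the main Data Flow starts. Table and column names here (`dbo.Staging`, `dbo.LookupTarget`, `BusinessKey`) are placeholders for your own schema:

```sql
-- Hypothetical upstream insert: add every key present in staging
-- but missing from the lookup target, in one set-based statement.
-- Run this before the main Data Flow so its Full Cache Lookup
-- is guaranteed to find every key.
INSERT INTO dbo.LookupTarget (BusinessKey)
SELECT DISTINCT s.BusinessKey
FROM dbo.Staging AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.LookupTarget AS t
    WHERE t.BusinessKey = s.BusinessKey
);
```

Because this runs once, serially, before any buffers flow, the race between adjacent buffers simply cannot occur.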