If the data isn't huge, it makes sense to change the order of operations and do the mapping in RAM:

- `SELECT * FROM Users`
- Insert the users into MongoDB
- Add pairs (SQL id, MongoDB id) to a hash table
- `SELECT * FROM Surveys`
- For each survey, replace `CreateUser` with `Hashtable[CreateUser]`
- Insert the surveys into MongoDB
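The steps above can be sketched as follows. The row layouts are made up for illustration, and `uuid4` stands in for MongoDB's ObjectId; in a real migration the rows would come from your SQL driver and the inserts would go through a MongoDB driver such as pymongo:

```python
import uuid

# Hypothetical rows, standing in for the SQL result sets.
sql_users = [
    {"Id": 1, "Name": "alice"},
    {"Id": 2, "Name": "bob"},
]
sql_surveys = [
    {"Id": 10, "Title": "Q1 survey", "CreateUser": 1},
    {"Id": 11, "Title": "Q2 survey", "CreateUser": 2},
]

# Stand-ins for the target MongoDB collections.
mongo_users, mongo_surveys = [], []

# Step 1: insert users, recording (SQL id -> MongoDB id) in a hash table.
id_map = {}
for row in sql_users:
    mongo_id = uuid.uuid4()  # a real driver would generate an ObjectId
    mongo_users.append({"_id": mongo_id, "name": row["Name"]})
    id_map[row["Id"]] = mongo_id

# Step 2: rewrite each survey's foreign key via the hash table, then insert.
for row in sql_surveys:
    mongo_surveys.append({
        "_id": uuid.uuid4(),
        "title": row["Title"],
        "createUser": id_map[row["CreateUser"]],  # remapped reference
    })
```

After this, every survey points at the new MongoDB id of its creator, with no second pass over the data.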
Typically, this will be quite a bit faster, because you don't need to go back and update objects in MongoDB and you won't have to query your data twice.
You should use batch inserts in MongoDB instead of inserting documents one by one. And instead of reading the newly created documents' ids back from the database, you can assign the MongoDB primary key (`_id`) yourself. If you don't, the driver will generate it anyway, not the database itself, so there's no real advantage in leaving it to the driver.
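A minimal sketch of that idea: assigning `_id` client-side means the id map is known before the insert even returns, so one batch call suffices. Here `insert_many` is a stand-in parameter for the driver's batch-insert method (e.g. pymongo's `Collection.insert_many`), and `uuid4` stands in for ObjectId generation:

```python
import uuid

def migrate_users(rows, insert_many):
    """Assign _id client-side, batch-insert, and return the
    (SQL id -> MongoDB id) map without any follow-up queries."""
    docs, id_map = [], {}
    for row in rows:
        doc_id = uuid.uuid4()  # a real driver would use ObjectId()
        docs.append({"_id": doc_id, "name": row["Name"]})
        id_map[row["Id"]] = doc_id
    insert_many(docs)  # one batch round-trip to the database
    return id_map

# Demo with a plain list standing in for the collection.
store = []
id_map = migrate_users([{"Id": 1, "Name": "alice"}], store.extend)
```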
If the amount of data is huge (such that you can't keep the lookup tables in RAM), I'd stick with database-side lookups and process the data subset by subset. That gets tricky if you have many foreign keys, though.
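One way the subset-by-subset approach could look, as a sketch: process the surveys in fixed-size batches and fetch only the id mappings each batch actually needs. `fetch_id_map` is a hypothetical callback standing in for a query against a mapping table kept in the database rather than in RAM:

```python
import itertools

def chunks(iterable, size):
    """Yield successive fixed-size batches from an iterable."""
    it = iter(iterable)
    while batch := list(itertools.islice(it, size)):
        yield batch

def remap_surveys(surveys, fetch_id_map, batch_size=1000):
    """Rewrite the CreateUser foreign key batch by batch, looking up
    only the ids the current batch references."""
    for batch in chunks(surveys, batch_size):
        needed = {s["CreateUser"] for s in batch}
        id_map = fetch_id_map(needed)  # query just this batch's keys
        for s in batch:
            s["CreateUser"] = id_map[s["CreateUser"]]
        yield batch

# Demo with an in-memory dict standing in for the database-side table.
full_map = {1: "a", 2: "b", 3: "c"}
surveys = [{"Id": i, "CreateUser": (i % 3) + 1} for i in range(7)]
remapped = [s for batch in remap_surveys(
    surveys, lambda ids: {k: full_map[k] for k in ids}, batch_size=3
) for s in batch]
```

The working set per batch is bounded by `batch_size`, so memory use stays flat no matter how large the survey table is.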