Perhaps your (shared?) server runs out of resources (memory, number of processes, etc.) when you try to execute all 30+ scripts at the same time.
I would use something like a queue and a scheduler (cron on Linux) instead; see for example this answer. In summary, you "check out" a batch of records to be updated and process them sequentially.
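A minimal sketch of what such a cron-driven worker could look like, assuming a MySQL `jobs` table with hypothetical columns `id`, `status` and `payload`, and a hypothetical `processJob()` holding your existing per-record logic (cron would run this script every few minutes instead of launching 30+ scripts at once):

```php
<?php
// Worker sketch: check out a batch of pending records and process them
// sequentially. Table and column names are assumptions, adapt as needed.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$batchSize = 10; // tune to what your server can handle per run

// Check out a batch of pending records.
$stmt = $pdo->prepare(
    "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT :n"
);
$stmt->bindValue(':n', $batchSize, PDO::PARAM_INT);
$stmt->execute();

// Process them one after another -- sequentially, not 30+ at once.
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $job) {
    processJob($job['payload']);
    $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")
        ->execute([$job['id']]);
}

// Hypothetical stand-in for your existing per-record logic.
function processJob(string $payload): void
{
    // ... do the actual work here ...
}
```

How two overlapping runs are kept from grabbing the same rows is covered further down.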
Depending on the resources you have at your disposal, you could also run multiple jobs at the same time, each processing its own part of the queue one record after another (no need for async exec() calls); see the sketch below.
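One simple way to split the queue, as a sketch: give each worker an index on the command line and let it only touch rows whose `id` falls in its slice (the crontab lines and the three-worker count are assumptions for illustration):

```php
<?php
// Sketch: N workers in parallel, each owning a disjoint slice of the queue.
// Hypothetical crontab, passing each worker its index:
//   */5 * * * * php /path/to/worker.php 0
//   */5 * * * * php /path/to/worker.php 1
//   */5 * * * * php /path/to/worker.php 2

$workerId    = (int) ($argv[1] ?? 0);
$workerCount = 3; // keep in sync with the crontab entries above

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Each worker only sees rows in its own slice (id modulo worker count),
// so the workers never compete for the same records.
$stmt = $pdo->prepare(
    "SELECT id, payload FROM jobs
     WHERE status = 'pending' AND id % :count = :worker
     ORDER BY id LIMIT 10"
);
$stmt->execute([':count' => $workerCount, ':worker' => $workerId]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $job) {
    processJob($job['payload']); // same hypothetical per-record logic as above
}
```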
You should also look into table locking to make sure no records get "checked out" by multiple instances of your script.
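With MySQL/InnoDB you don't necessarily need a full table lock for this; a single atomic UPDATE that claims a batch (or `SELECT ... FOR UPDATE` inside a transaction) achieves the same guarantee. A sketch, again assuming the hypothetical `jobs` table plus an added `claimed_by` column:

```php
<?php
// Atomic "check out": one UPDATE claims a batch for this run, so two
// concurrent workers can never claim the same row. (LOCK TABLES or
// SELECT ... FOR UPDATE in a transaction are the other common options.)
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$claimToken = uniqid('worker_', true); // unique per run

// Claim up to 10 pending rows in one atomic statement.
$pdo->prepare(
    "UPDATE jobs SET status = 'running', claimed_by = :token
     WHERE status = 'pending'
     ORDER BY id LIMIT 10"
)->execute([':token' => $claimToken]);

// Fetch exactly the rows this run claimed -- no other worker has them.
$stmt = $pdo->prepare("SELECT id, payload FROM jobs WHERE claimed_by = :token");
$stmt->execute([':token' => $claimToken]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $job) {
    processJob($job['payload']);
    $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")
        ->execute([$job['id']]);
}
```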