Question

I have a bulkified trigger that uses maps in order to avoid running into SOQL governor limits. This trigger also uses static class variables to control recursion and limit the number of queries.

What I'm doing is this: when a bulk operation such as an insert or update is started on the object (the Contact object in this case), the trigger builds the maps of related Accounts on the first trigger firing, and then uses those maps for the rest of the trigger firings.

Here is an example of the operation, which is working great, but only for After Update & After Insert trigger operations (a rough code sketch follows the list):

  1. Check that a static class variable is not true.

  2. If variable is not true build maps.

  3. Set the static class variable to true.

  4. Perform the trigger operations.
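To make that concrete, here is a minimal sketch of the kind of helper I mean; the class, method, and variable names (ContactTriggerHelper, buildMapsOnce, relatedAccounts, mapsBuilt) are made up for illustration and are not from my actual code:

    public class ContactTriggerHelper {
        // Static members persist for the whole transaction, so they survive across
        // the 200-record chunks that a single bulk insert/update is split into.
        public static Boolean mapsBuilt = false;
        public static Map<Id, Account> relatedAccounts = new Map<Id, Account>();

        public static void buildMapsOnce(List<Contact> contacts) {
            if (mapsBuilt) {
                return; // already built by an earlier trigger firing in this transaction
            }
            Set<Id> accountIds = new Set<Id>();
            for (Contact c : contacts) {
                if (c.AccountId != null) {
                    accountIds.add(c.AccountId);
                }
            }
            // One query for the whole transaction instead of one per trigger firing
            relatedAccounts = new Map<Id, Account>(
                [SELECT Id, Name FROM Account WHERE Id IN :accountIds]
            );
            mapsBuilt = true;
        }
    }

    trigger ContactTrigger on Contact (after insert, after update) {
        ContactTriggerHelper.buildMapsOnce(Trigger.new);
        // ... the rest of the trigger logic uses ContactTriggerHelper.relatedAccounts ...
    }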

For the insert/update triggers, the session state is maintained and the static class variable is not reset until after the end of the bulk operation.

However, for the before delete trigger it seems there is no session state, and the session is reset each time a record is deleted. The session is reset, but the governor limits are cumulative for a bulk delete of records. So with a before delete trigger, even when using maps, the SOQL query count keeps climbing and runs into the infamous 'Too many SOQL queries' limit when deleting more than 100 records.

Any thoughts on how to prevent running into the SOQL limit would be much appreciated. I wasn't able to find anything on this anywhere.


Solution

One option you could take is to use the trigger to schedule a batch Apex class for execution. For whichever object kicks off the cascading delete, use the trigger to create an instance of the batch, passing to it a list of source IDs.
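For example (the object and class names here are purely illustrative, not from your org), the delete trigger could look something like this:

    trigger ParentCascadeDelete on Parent__c (before delete) {
        // Hand the IDs of the records being deleted to a batch job instead of
        // doing the cascading work synchronously in the trigger.
        Database.executeBatch(new CascadeDeleteBatch(Trigger.oldMap.keySet()), 200);
    }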

Then, in the execute method of the batch class, you can build up the maps etc. for each batch and perform the deletes there. Batch Apex has considerably higher governor limits at the sacrifice of synchronous execution; that said, the process will generally kick off within a couple of seconds of your operation.
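A skeleton of the batch class itself might look like the following; again, the names and the child query are assumptions for the sake of the example, and each execute call runs with its own set of governor limits:

    public class CascadeDeleteBatch implements Database.Batchable<SObject> {
        private Set<Id> sourceIds;

        public CascadeDeleteBatch(Set<Id> sourceIds) {
            this.sourceIds = sourceIds;
        }

        public Database.QueryLocator start(Database.BatchableContext bc) {
            // Assumes the child records still reference the source IDs when the job runs.
            return Database.getQueryLocator([
                SELECT Id FROM Child__c WHERE Parent__c IN :sourceIds
            ]);
        }

        public void execute(Database.BatchableContext bc, List<SObject> scope) {
            // Build up the maps needed for this scope here, then perform the deletes.
            // Each execute() invocation gets a fresh set of governor limits.
            delete scope;
        }

        public void finish(Database.BatchableContext bc) {
            // Optional: send a notification or chain another job here.
        }
    }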

Other than this, it may simply be a case of optimising your code such that cascading deletes always work on lists as large as possible (up to the 200 limit, of course), or maybe you could use Master-Detail relationships to take care of some of the delete operations for you?
