Question

We have a REST API which calls a third-party REST API to send emails. The third-party API is not very reliable and fails intermittently with a 500.

Our clients do not want to retry on their side and instead asked us to build a retry mechanism for failed emails.

We are using Spring Retry to implement the retry and circuit-breaker patterns; in the fallback method we store the failed request somewhere (DB or file is still an open question).

We have a scheduled job that runs every hour, picks up all the failures whose initial retries were exhausted, and tries to re-send the emails.
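For concreteness, here is roughly what that looks like (a sketch only; `ThirdPartyEmailClient`, `FailedEmailRepository`, `FailedEmail`, and `EmailMessage` are placeholder names for our own types, and `@EnableRetry`/`@EnableScheduling` are assumed on a configuration class):

```java
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Service
class EmailSendService {

    private final ThirdPartyEmailClient client;        // wrapper around the third-party REST API
    private final FailedEmailRepository failedEmails;  // store for requests whose retries are exhausted

    EmailSendService(ThirdPartyEmailClient client, FailedEmailRepository failedEmails) {
        this.client = client;
        this.failedEmails = failedEmails;
    }

    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000))
    public void send(EmailMessage message) {
        client.send(message);  // throws when the third party answers with a 500
    }

    @Recover
    public void persistForLater(Exception e, EmailMessage message) {
        // Initial retries exhausted: persist the failed request for the hourly job.
        failedEmails.save(FailedEmail.from(message, e.getMessage()));
    }
}

@Component
class FailedEmailResendJob {

    private final FailedEmailRepository failedEmails;
    private final ThirdPartyEmailClient client;

    FailedEmailResendJob(FailedEmailRepository failedEmails, ThirdPartyEmailClient client) {
        this.failedEmails = failedEmails;
        this.client = client;
    }

    @Scheduled(cron = "0 0 * * * *")  // top of every hour
    public void resendFailures() {
        for (FailedEmail failed : failedEmails.findAllPending()) {
            try {
                client.send(failed.toMessage());
                failedEmails.markSent(failed.getId());
            } catch (Exception e) {
                failedEmails.incrementAttempts(failed.getId());  // try again next hour
            }
        }
    }
}
```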

My question is whether there are any best practices for storing the failed request:

  1. Should we store the request as-is (body, URL, and headers) as a blob/text column in the DB so the scheduled service can simply re-send it?
  2. Should we write the failed request to a file somewhere, maybe S3, and re-send it from there?
  3. Should we reconstruct the API request from scratch using the data the client passed to us, which is already stored in different database tables (account numbers, usernames, URLs), plus fetching API keys and rebuilding the URLs?

We are leaning towards option 3. There is more development work involved, but we already have all the data stored and can use it to reconstruct the whole request. Is there anything I am missing here, or any best practices or design patterns I can leverage?

Solution

The best approach with emails is not to have the API attempt to send them synchronously. Sending email is a slow process and not a suitable task to perform inside a web request.

Instead, have the API persist the send-email request to a database, split into its individual fields, not as a blob.

Then have a worker process pick up new jobs from the database and attempt to send them. If the send fails, the worker process can automatically pick up the job again on its next run through.
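A rough sketch of that shape, assuming Spring Data JPA and a hypothetical `EmailGateway` wrapping the third-party API (the entity stores individual fields, not a blob):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.time.Instant;
import java.util.List;

@Entity
class EmailJob {
    @Id @GeneratedValue Long id;
    String toAddress;
    String subject;
    String body;
    String status = "PENDING";         // PENDING -> SENT; failures simply stay PENDING
    int attempts;
    Instant createdAt = Instant.now();
}

interface EmailJobRepository extends JpaRepository<EmailJob, Long> {
    List<EmailJob> findByStatus(String status);
}

@Component
class EmailWorker {

    private final EmailJobRepository jobs;
    private final EmailGateway gateway;   // hypothetical wrapper around the third-party API

    EmailWorker(EmailJobRepository jobs, EmailGateway gateway) {
        this.jobs = jobs;
        this.gateway = gateway;
    }

    @Scheduled(fixedDelay = 60_000)   // poll for new jobs every minute
    public void processPending() {
        for (EmailJob job : jobs.findByStatus("PENDING")) {
            try {
                gateway.send(job.toAddress, job.subject, job.body);
                job.status = "SENT";
            } catch (Exception e) {
                job.attempts++;           // left PENDING, picked up again on the next run
            }
            jobs.save(job);
        }
    }
}
```

The API endpoint itself only saves a new EmailJob row and returns immediately; it never talks to the email provider.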

A more advanced setup would replace the database with a message queue, but it is easier to explain with a database.
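With a queue, the API side shrinks to publishing a message. A sketch assuming Spring AMQP, a queue named `email.requests` configured elsewhere, a JSON message converter, and a hypothetical `EmailMessage` type:

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class EmailController {

    private final RabbitTemplate rabbit;

    EmailController(RabbitTemplate rabbit) {
        this.rabbit = rabbit;
    }

    @PostMapping("/emails")
    public ResponseEntity<Void> sendEmail(@RequestBody EmailMessage message) {
        // Hand the work to the queue; a separate consumer performs the slow, unreliable send.
        rabbit.convertAndSend("email.requests", message);
        return ResponseEntity.accepted().build();
    }
}
```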

You can see how this setup makes it easy to handle the various failure scenarios: you can retry, report back to the client after a certain amount of time, report on invalid email addresses, and so on.

OTHER TIPS

Regarding option 1, storing the request as-is with body, URL, and headers in a blob/text column in the DB:

Baaad, bad idea. Consider things like authentication tokens, which (a) you would need to store securely and (b) tend to expire.

Also, some services (e.g. AWS SES) require you to sign each request, so a request is only valid for a short window, and that signature lives in the headers. For any retry to work you would have to recalculate it anyway.

The problem with option 3, reconstructing the API request from scratch from the data already stored in the database, is that in the time between the failed request and the retry a lot of that data may have changed.

For argument's sake, consider an e-mail for an order placed by Mary Lamb, who happens to marry and change her name to Mary Smith just before the retry is sent. Now consider that the order e-mails are sent to multiple people and only one of the batch failed. Upon retry, that one person will get an e-mail for the same order, but it will appear to be a different e-mail from what everyone else received.

I would go with storing the inputs to the model behind the body of the failed request (I imagine you have some class that represents a message to be sent; store the properties of that). In the simplest terms: store the body of the HTTP request, but not the headers.
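In other words, persist something like the following (a sketch assuming JPA; the field names are illustrative), and rebuild the URL, headers, API keys, and signatures fresh on every retry:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.Lob;

import java.time.Instant;

@Entity
class FailedEmail {
    @Id @GeneratedValue Long id;
    String recipient;
    String subject;
    @Lob
    String body;            // the rendered body (or serialized body inputs) as it was at failure time
    Instant failedAt;
    int resendAttempts;     // give up and alert once this passes some threshold
}
```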

Licensed under: CC-BY-SA with attribution