Question

I understand that to achieve vast scalability and reliability, SQS parallelizes resources extensively. It uses redundant servers even for small queues, and messages posted to a queue are stored redundantly as multiple copies. These are the factors that prevent it from offering exactly-once delivery as RabbitMQ does. I have seen even deleted messages being delivered again.

The implication for developers is that they need to be prepared for duplicate delivery of messages. Amazon claims this is not a problem, but if it is, then the developer must use some synchronization construct such as a database transaction lock or a DynamoDB conditional write. Both of these reduce scalability.
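As a minimal sketch of the conditional-write idea (a plain dict standing in for a DynamoDB table; a real system would use `put_item` with a `ConditionExpression` like `attribute_not_exists(message_id)`), a consumer can atomically "claim" a message ID and skip duplicates:

```python
# Stand-in for a DynamoDB table keyed by message_id.
processed = {}

def claim(message_id):
    """Atomically claim a message; return False if it was already claimed."""
    if message_id in processed:
        return False  # duplicate delivery: another worker already claimed it
    processed[message_id] = "claimed"
    return True

def handle(message_id, body):
    """Process a message at most once, despite duplicate deliveries."""
    if not claim(message_id):
        return "skipped-duplicate"
    # ... do the real work exactly once ...
    processed[message_id] = body.upper()
    return processed[message_id]

print(handle("m-1", "hello"))  # HELLO
print(handle("m-1", "hello"))  # skipped-duplicate (redelivery is a no-op)
```

The function names and the dict-based store are illustrative, not AWS APIs; the point is only that the claim step must be a single atomic check-and-set.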

The question is:

In light of the duplicate-delivery problem, how does the message-invisibility-period feature hold up? The message is not guaranteed to be invisible. If the developer has to make their own arrangements for synchronization, what benefit does the invisibility period provide? I have seen messages redelivered even when they were supposed to be invisible.

Edit

Here I include some references:

  1. What is a good practice to achieve the "Exactly-once delivery" behavior with Amazon SQS?
  2. http://aws.amazon.com/sqs/faqs/#How_many_times_will_I_receive_each_message
  3. http://aws.amazon.com/sqs/faqs/#How_does_Amazon_SQS_allow_multiple_readers_to_access_the_same_message_queue_without_losing_messages_or_processing_them_many_times
  4. http://aws.amazon.com/sqs/faqs/#Can_a_deleted_message_be_received_again

The solution

Message invisibility solves a different problem from guaranteeing one-and-only-one delivery. Consider a long-running operation on an item in the queue. If the processor craps out during the operation, you don't want to delete the message; you want it to reappear and be handled again by a different processor.

So the pattern is...

  1. Write (push) item into queue
  2. View (peek) item in queue
  3. Mark item invisible
  4. Execute process on item
  5. Write results
  6. Delete (pop) item from queue

So whether you get duplicate delivery or not, you still need to ensure that you process the item in the queue. If you delete it on pulling it off the queue, and then your server dies, you may lose that message forever. The invisibility period enables aggressive scaling through the use of spot instances, and guarantees (using the above pattern) that you won't lose a message.
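The pattern above can be sketched with a toy in-memory queue (hypothetical, not the SQS API): receiving a message hides it for a visibility period, and if it is not deleted before the period expires, it reappears for another consumer.

```python
import time

class ToyQueue:
    """In-memory illustration of visibility-timeout semantics."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # msg_id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)  # immediately visible

    def receive(self):
        """Return the first visible message and hide it for the timeout."""
        now = time.monotonic()
        for msg_id, (body, until) in self.messages.items():
            if until <= now:  # visible again
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=0.05)
q.send("job-1", "resize image")

assert q.receive() == ("job-1", "resize image")  # worker A takes the message
assert q.receive() is None        # invisible: worker B sees nothing
time.sleep(0.06)                  # worker A "crashed"; timeout expires
assert q.receive() == ("job-1", "resize image")  # it reappears for worker B
q.delete("job-1")                 # worker B finishes and deletes it
assert q.receive() is None
```

The class and method names are made up for the sketch; real SQS exposes this as the `VisibilityTimeout` on `ReceiveMessage` and an explicit `DeleteMessage` call.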

But it doesn't guarantee once-and-only-once delivery, and I don't think it's designed for that problem. I also don't think it's an insurmountable problem. In our case (and I can see why I've never noticed the issue before), we're writing results to S3. It's no big deal if it overwrites the same file with the same data. Of course, if it's a debit transaction going to a bank a/c, you'd probably want some sort of correlation ID... and most systems already have those in there. So if you get a duplicate correlation value, you throw an exception and move on.
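The correlation-ID idea can be sketched like this (the exception name, function, and in-memory set are assumptions for illustration; a real system would check the ID against a durable store):

```python
class DuplicateOperation(Exception):
    """Raised when a correlation ID has already been processed."""

seen_correlation_ids = set()

def debit(correlation_id, account, amount, balances):
    """Apply a debit at most once per correlation ID."""
    if correlation_id in seen_correlation_ids:
        raise DuplicateOperation(correlation_id)
    seen_correlation_ids.add(correlation_id)
    balances[account] -= amount

balances = {"acct-42": 100}
debit("txn-1", "acct-42", 30, balances)      # first delivery: applied
try:
    debit("txn-1", "acct-42", 30, balances)  # duplicate delivery: rejected
except DuplicateOperation:
    pass                                     # throw an exception and move on
assert balances["acct-42"] == 70             # debited exactly once
```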

Good question. Highlighted something for me.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow