Question

I want to create a web service hosted in Windows Azure. The clients will upload files for processing, the cloud will process those files and produce result files, and the clients will download the results.

I guess I'll use web roles for handling HTTP requests, worker roles for the actual processing, and something like Azure Queue or Azure Table Storage for tracking requests. Let's pretend it'll be Azure Table Storage - one "request" record per user-uploaded file.
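For concreteness, a "request" entity might look roughly like this - a minimal sketch using the azure-data-tables Python package, where the property names and the partition/row key scheme are just placeholders:

```python
# Minimal sketch of one "request" entity per uploaded file (names are illustrative).
from azure.data.tables import TableClient

table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="Requests")

request = {
    "PartitionKey": "user-123",     # e.g. one partition per user
    "RowKey": "file-0001",          # one row per uploaded file
    "BlobUrl": "https://myaccount.blob.core.windows.net/uploads/file-0001",
    "Status": "ReadyForProcessing", # -> IsBeingProcessed -> Processed
}
table.create_entity(entity=request)
```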

A major design problem is that processing a single file can take anywhere from one second to, say, ten hours.

So I expect the following case: a worker role starts, goes to Azure Table Storage, finds a request marked "ready for processing", marks it "is being processed", and starts the actual processing. Normally it would process the file and mark the request "processed", but what if it dies unexpectedly?

Unless I take care of it, the request will remain in the "is being processed" state forever.

How do I track requests that are marked "is being processed" but abandoned? What mechanism in Windows Azure would be most convenient for that?


Solution

I believe this problem is not technology-specific.
Since your processing jobs are long-running, I suggest these jobs report their progress during execution. That way, a job which has not reported progress for a substantial duration becomes a clear candidate for cleanup and can then be restarted on another worker role.
How you record progress and do job swapping is up to you. One approach is to use a database as the recording mechanism and create an agent worker process that pings the job-progress table. If the agent detects a problem, it can take corrective action.
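As a sketch of that idea - assuming Azure Table Storage plays the role of the "database" and using the azure-data-tables Python package; the Status/LastHeartbeat property names are my own - the worker stamps a heartbeat as it makes progress, and a separate agent resets any job whose heartbeat has gone quiet:

```python
# Sketch: worker heartbeats plus a sweeper agent that requeues stale jobs.
from datetime import datetime, timedelta, timezone
from azure.data.tables import TableClient, UpdateMode

table = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="Jobs")

def report_progress(job):
    """Called periodically by the worker while it processes a file."""
    job["LastHeartbeat"] = datetime.now(timezone.utc)
    table.update_entity(entity=job, mode=UpdateMode.MERGE)

def sweep_stale_jobs(max_silence=timedelta(minutes=5)):
    """Agent process: reset jobs whose heartbeat has not moved recently."""
    cutoff = datetime.now(timezone.utc) - max_silence
    for job in table.query_entities("Status eq 'IsBeingProcessed'"):
        if job["LastHeartbeat"] < cutoff:
            job["Status"] = "ReadyForProcessing"  # another worker can pick it up
            table.update_entity(entity=job, mode=UpdateMode.MERGE)
```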

Another approach would be to associate the worker role's identity with the long-running process. The worker roles can then communicate their health status using some sort of heartbeat.
Had the jobs not been long-running, you could have recorded the job's start time instead of a status flag and used a timeout to determine whether processing had failed.

OTHER TIPS

The main issue you have is that queues cannot set a visibility timeout larger than 2 hrs today. So, you need another mechanism to indicate that active work is in progress. I would suggest a blob lease. For every file you process, you lease either the blob itself or a 0-byte marker blob. Your workers scan the available blobs and attempt to lease them. If a worker gets the lease, the file is not being processed, so it goes ahead and processes it. If the lease attempt fails, another worker must already be actively working on it.
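A sketch of that lease pattern using the azure-storage-blob Python package (the original discussion predates this SDK, and the container name plus the process_file placeholder are assumptions): each worker tries to acquire a lease on the blob and simply skips it if another worker already holds one.

```python
# Sketch: use a blob lease as the "I am working on this" marker.
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<storage-connection-string>", container_name="uploads")

for props in container.list_blobs():
    blob = container.get_blob_client(props.name)
    try:
        # 60 s is the maximum finite lease duration; it must be renewed
        # while processing continues (see the renewal sketch further down).
        lease = blob.acquire_lease(lease_duration=60)
    except HttpResponseError:
        continue  # lease already held: another worker is processing this file
    process_file(blob, lease)  # placeholder for your long-running processing
```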

Once the worker has completed processing the file, it simply copies the file into another container in blob storage (or deletes it if you wish) so that it is not scanned again.
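Continuing the sketch above, completion might look like this (the "processed" container name is an assumption):

```python
# Sketch: once processing succeeds, copy the blob to a "processed" container
# so it is never scanned (or leased) again, then remove the original.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

def finish(blob, lease):
    processed = service.get_blob_client("processed", blob.blob_name)
    processed.start_copy_from_url(blob.url)  # copy into the done container
    blob.delete_blob(lease=lease)            # delete the original under our lease
```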

Leases are really your only answer here until queue messages can be renewed.

edit: I should clarify that the reason leases work here is that a lease must be actively maintained every 30 seconds or so, so there is only a very small window before you know whether someone has died or is still working on it.
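To make that concrete, here is a hedged sketch of keeping the lease alive while the long-running work proceeds - a background thread renewing a 60-second lease roughly every 30 seconds (do_the_long_running_work is a placeholder):

```python
# Sketch: keep the lease alive while processing; if this process dies,
# the lease expires within ~60 s and another worker can acquire it.
import threading

def keep_lease_alive(lease, stop_event, interval=30):
    while not stop_event.wait(interval):
        lease.renew()  # BlobLeaseClient.renew()

def process_with_lease(blob, lease):
    stop = threading.Event()
    renewer = threading.Thread(
        target=keep_lease_alive, args=(lease, stop), daemon=True)
    renewer.start()
    try:
        do_the_long_running_work(blob)  # placeholder for your processing
    finally:
        stop.set()
        lease.release()
```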

The problem you describe is best handled with Azure Queues, as Azure Table Storage won't give you any built-in management mechanism for this.

Using Azure Queues, you set a timeout when you get an item off the queue (default: 30 seconds). Once you read a queue item (e.g. "process file x waiting for you in a blob at url y"), that queue item becomes invisible for the time period specified. This means other worker role instances won't try to grab it at the same time. Once you complete processing, you simply delete the queue item.
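A sketch of that flow with the azure-storage-queue Python package (the original answers assume the classic .NET client; the queue name, the 10-minute timeout, and the process placeholder are assumptions):

```python
# Sketch: read a message with a visibility timeout, process it, then delete it.
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-connection-string>", queue_name="requests")

# The message stays invisible to other workers for 600 s; if we crash
# before delete_message, it reappears after that timeout.
for msg in queue.receive_messages(visibility_timeout=600):
    process(msg.content)       # e.g. "process file x waiting in blob at url y"
    queue.delete_message(msg)  # only delete once processing has fully succeeded
```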

Now: let's say you're almost done and haven't deleted the queue item yet. All of a sudden, your role instance unexpectedly crashes (or the hardware fails, or you're rebooted for some reason). The queue-item processing code has now stopped. Eventually, once the timeout you set has elapsed since the queue item was originally read, it becomes visible again. One of your worker role instances will once again read the queue item and can process it.

A few things to keep in mind:

  • Queue items have a dequeue count. Pay attention to it. Once you hit a certain number of dequeues for a specific queue item (I like to use 3 as my limit), you should move that queue item to a 'poison queue' or table storage for offline evaluation - there could be something wrong with the message or with the process around handling it. A sketch of this check appears after this list.
  • Make sure your processing is idempotent (i.e. the same message can be processed multiple times without adverse side effects).
  • Because a queue item can go invisible and then return to visibility later, queue items don't necessarily get processed in FIFO order.
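Here is the dequeue-count check from the first bullet, sketched with the azure-storage-queue Python package (the poison-queue name and the process placeholder are assumptions; the limit of 3 comes from the answer):

```python
# Sketch: divert messages that keep failing to a poison queue for offline review.
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    "<storage-connection-string>", "requests")
poison = QueueClient.from_connection_string(
    "<storage-connection-string>", "requests-poison")

MAX_DEQUEUES = 3

for msg in queue.receive_messages(visibility_timeout=600):
    if msg.dequeue_count > MAX_DEQUEUES:
        poison.send_message(msg.content)  # park it for offline evaluation
        queue.delete_message(msg)         # stop it from cycling forever
        continue
    process(msg.content)                  # placeholder; must be idempotent
    queue.delete_message(msg)
```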

EDIT: Per Ryan's answer - Azure queue messages max out at a 2-hour timeout. Service Bus queue messages have a far-greater timeout. This feature just went CTP a few days ago.

Your role's OnStop() could be part of the solution, but there are some circumstances (hardware failure, for instance) where it won't get called. To cover that case, have your OnStart() mark everything tagged with the same RoleInstanceID as abandoned: OnStart() wouldn't be running if that work were still in progress. (Luckily, Azure reuses its role instance IDs, so the restarted instance sees the work its predecessor left behind.)
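A hedged sketch of that OnStart() cleanup against a Table Storage job table, using the azure-data-tables Python package. The OwnerInstance/Status properties and the way the instance ID is obtained are assumptions; in a classic .NET role you would read RoleEnvironment.CurrentRoleInstance.Id instead.

```python
# Sketch: at startup, any job still owned by *my* instance ID must be abandoned,
# because this instance is only just starting. Property names are illustrative.
import os
from azure.data.tables import TableClient, UpdateMode

INSTANCE_ID = os.environ.get("ROLE_INSTANCE_ID", "unknown")  # hypothetical env var

def on_start_cleanup():
    table = TableClient.from_connection_string(
        conn_str="<storage-connection-string>", table_name="Jobs")
    stale = table.query_entities(
        f"Status eq 'IsBeingProcessed' and OwnerInstance eq '{INSTANCE_ID}'")
    for job in stale:
        job["Status"] = "ReadyForProcessing"  # let any worker pick it up again
        table.update_entity(entity=job, mode=UpdateMode.MERGE)
```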

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow