How do I make my Windows Azure application resilient to a catastrophic Azure datacenter failure?

StackOverflow https://stackoverflow.com/questions/6057494

  • 15-11-2019

Question

AFAIK, Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of partial or complete datacenter outages. It seems that if I had copies of my application in two "regions" and one "region" went down, my application could keep working as if nothing had happened.

Is there anything like that with Windows Azure? How do I address the risk of a catastrophic datacenter outage with Windows Azure?

Was it helpful?

Solution

Within a single data center, your Windows Azure application has the following benefits:

  • Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
  • With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.

Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:

  • For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center), or an on-premises SQL Server database. More info here. Since this feature is still considered a Preview feature, you have to go here to set it up.
  • For Azure storage (tables, blobs), you'll need to handle replication to a second data center, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else. EDIT: Per Ryan's answer, there's data geo-replication for blobs and tables. HOWEVER: Aside from a mention in this blog post in December, and possibly at PDC, this is not live.
  • For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
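The hourly background task suggested above for replicating table/blob data can be sketched as follows. This is a minimal illustration, not the Azure SDK: the listings are plain dictionaries standing in for hypothetical calls that enumerate blob names with their last-modified timestamps, and the function only decides which blobs such a task would need to copy on each pass.

```python
from datetime import datetime, timezone

def blobs_to_copy(source, dest):
    """Given {blob_name: last_modified} listings for the primary and
    secondary storage accounts, return the names the background task
    should copy: blobs missing from the secondary, or blobs that are
    newer in the primary than in the secondary."""
    return sorted(
        name for name, modified in source.items()
        if name not in dest or modified > dest[name]
    )

# Example hourly pass with hypothetical listings:
source = {
    "logs/2011-05.log": datetime(2011, 5, 20, 9, 0, tzinfo=timezone.utc),
    "images/banner.png": datetime(2011, 5, 1, 12, 0, tzinfo=timezone.utc),
}
dest = {
    "images/banner.png": datetime(2011, 5, 1, 12, 0, tzinfo=timezone.utc),
}
print(blobs_to_copy(source, dest))  # only the log file is missing from the secondary
```

In a real task you would wrap this selection in a loop that runs on a schedule (e.g. a worker role waking up every hour) and then performs the actual copy for each returned name.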

Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and Compute instances in additional data centers).

Just like with on-premises environments, DR needs to be carefully thought out and implemented.

Other tips

David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an asynchronous process with a target lag of roughly 10 minutes. It is also out of your control and exists purely for the data-center-loss scenario. In total, your data is replicated 6 times across 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).

If a data center was lost, they would flip over your DNS for blob and table storage to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc).

So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
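The hot-standby arrangement described above, with Traffic Manager failing over between sub-regions, can be illustrated with a small sketch. Traffic Manager actually performs this at the DNS level; the code below is only a hypothetical client-side analogue, assuming `endpoints` is an ordered list of `(name, probe)` pairs where each probe returns `True` when that deployment is healthy.

```python
def pick_endpoint(endpoints):
    """Return the name of the first healthy endpoint, mimicking a
    failover policy: the primary sub-region is tried first, then
    each standby in order."""
    for name, probe in endpoints:
        try:
            if probe():
                return name
        except Exception:
            pass  # an unreachable endpoint counts as unhealthy
    raise RuntimeError("no healthy endpoint in any sub-region")

# Hypothetical probes: the primary datacenter is down, the standby is up.
endpoints = [
    ("north-central-us", lambda: False),
    ("south-central-us", lambda: True),
]
print(pick_endpoint(endpoints))  # traffic is routed to the standby
```

The key design point, mirrored by Traffic Manager's failover policy, is that ordering encodes preference: the standby only receives traffic when the probe against the primary fails.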

The one failure mode not accounted for here is an error being replicated across data centers along with your data. For that scenario, you may want to consider running Azure PaaS as part of an HP Cloud offering, in either a load-balanced or failover arrangement.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow