Question

I observe the following weird behavior. I have an Azure web role deployed to the live Azure cloud. I click "Configure" in the Azure Management Portal and change the number of instances; the portal shows some "activity". I then open the browser, navigate to the URL assigned to my deployment, and start refreshing the page roughly once every two seconds. The page reloads fine many times, then for some time it stops reloading and requests are rejected; after about half a minute, requests are handled normally again.

What is happening? Is the web server temporarily stopped? How do I change the number of instances so that HTTP requests to the role are handled at all times?


Solution

When you change the configuration, your current instances might be restarted. This is likely what you ran into: while an instance restarts, your website does not respond for about 30 seconds.

Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether the cause is the role restarting.
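You can verify this by subscribing to the RoleEnvironment.Changing event in your role entry point and checking whether a topology change (a change in the instance count) is about to be applied. A minimal sketch, assuming a standard web role entry point (the class name and the handler logic are illustrative; the event and types come from Microsoft.WindowsAzure.ServiceRuntime):

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raised before a configuration change (including a topology change,
        // i.e. a change in the number of instances) is applied.
        RoleEnvironment.Changing += RoleEnvironmentChanging;
        return base.OnStart();
    }

    private static void RoleEnvironmentChanging(object sender,
        RoleEnvironmentChangingEventArgs e)
    {
        // A change in instance count shows up as a topology change.
        if (e.Changes.Any(change => change is RoleEnvironmentTopologyChange))
        {
            // Setting Cancel to true forces this instance to be recycled
            // (restarted). Leaving it false lets the change be applied
            // without a restart, if the role can handle it in-place.
            e.Cancel = false;
        }
    }
}
```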

Other tips

What you are doing is manual. Have you looked at the SDK for autoscaling Azure? http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications

Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.

Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the config (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests always go to the same instance due to keep-alive.

It would be interesting to know what the behavior is if you disable keep-alives for your webrole. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.
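If you want to try that, keep-alive can be switched off in IIS through web.config. A minimal sketch (allowKeepAlive is a standard IIS httpProtocol setting; with it off, each request opens a new connection and can be balanced to a different instance):

```xml
<configuration>
  <system.webServer>
    <!-- Disable HTTP keep-alive so successive requests are not pinned
         to the same instance by a persistent connection. -->
    <httpProtocol allowKeepAlive="false" />
  </system.webServer>
</configuration>
```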

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow