Question

Update 2009-05-21

I have been testing approach #2 using a single network share. It is resulting in some issues with Windows Server 2003 under load:

http://support.microsoft.com/kb/810886

End Update

I've received a proposal for an ASP.NET web site that works as follows:

Hardware load balancer -> 4 IIS6 web servers -> SQL Server DB with failover cluster

Here's the problem...

We are choosing where to store the web files (ASPX, HTML, CSS, images). Two options have been proposed:

1) Create identical copies of the web files on each of the 4 IIS servers.

2) Put a single copy of the web files on a network share accessible by the 4 web servers. The webroots on the 4 IIS servers will be mapped to the single network share.

Which is the better solution? Option 2 is obviously simpler for deployments since it requires copying files to only a single location. However, I wonder if there will be scalability issues, since four web servers are all accessing a single set of files. Will IIS cache these files locally? Would it hit the network share on every client request? Also, will access to a network share always be slower than getting a file from a local hard drive? Does the load on the network share become substantially worse if more IIS servers are added?

To give some perspective, this is for a web site that currently receives around 20 million hits per month. At a recent peak, it was receiving about 200 hits per second.

Please let me know if you have particular experience with such a setup. Thanks for the input.

Update 2009-03-05

To clarify my situation - the "deployments" in this system are far more frequent than for a typical web application. The web site is the front end for a back-office CMS. Each time content is published in the CMS, new pages (ASPX, HTML, etc.) are automatically pushed to the live site. The deployments are basically "on demand". Theoretically this push could happen several times within a minute or more. So I'm not sure it would be practical to deploy one web server at a time. Thoughts?

Solution

I would share the load between the 4 servers. It's not that many.

You don't want that single point of contention, either during deployment or as a single point of failure in production.

When deploying, you can do them 1 at a time. Your deployment tools should automate this by notifying the load balancer that the server shouldn't be used, deploying the code, doing any pre-compilation work needed, and finally notifying the load balancer that the server is ready.

We used this strategy in a 200+ web server farm and it worked nicely for deploying without service interruption.
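
To make that concrete, here is a minimal sketch of the one-at-a-time rolling deploy described above. The RemoveFromPool/AddToPool calls are placeholders for whatever management interface your particular load balancer exposes, the server names and paths are made up, and robocopy is just one way to copy the files:

```
using System;
using System.Diagnostics;

class RollingDeploy
{
    static readonly string[] Servers = { "web1", "web2", "web3", "web4" };

    static void Main()
    {
        foreach (var server in Servers)
        {
            RemoveFromPool(server);                 // stop routing traffic to this node
            CopyFiles(@"\\staging\site", @"\\" + server + @"\wwwroot");
            // Optionally hit a warm-up URL here so the first real request
            // doesn't pay the ASP.NET compilation cost.
            AddToPool(server);                      // node is updated, re-enable it
        }
    }

    static void CopyFiles(string source, string target)
    {
        // /MIR mirrors the source tree; robocopy exit codes below 8 indicate success.
        using (var p = Process.Start("robocopy", "\"" + source + "\" \"" + target + "\" /MIR"))
        {
            p.WaitForExit();
        }
    }

    // Placeholders - wire these up to your load balancer's API or scripts.
    static void RemoveFromPool(string server) { Console.WriteLine("draining " + server); }
    static void AddToPool(string server)      { Console.WriteLine("enabling " + server); }
}
```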

OTHER TIPS

If the main concern is performance, which I assume it is since you're spending all this money on hardware, then it doesn't really make sense to share a network filesystem just for convenience's sake. Even if the network drives are extremely high performing, they won't perform as well as native drives.

Deploying your web assets is automated anyway (right?), so doing it in multiples isn't really much of an inconvenience.

If it's more complicated than you're letting on, then maybe something like DeltaCopy would be useful to keep those disks in sync.

One reason the central share is bad is that it makes the NIC on the share server the bottleneck for the whole farm and creates a single point of failure.

With IIS6 and 7, the scenario of using a single network share across N attached web/app server machines is explicitly supported. MS did a ton of perf testing to make sure this scenario works well. Yes, caching is used. With a dual-NIC server, one for the public internet and one for the private network, you'll get really good performance. The deployment is bulletproof.

It's worth taking the time to benchmark it.

You can also evaluate an ASP.NET Virtual Path Provider, which would allow you to deploy a single ZIP file for the entire app. Or, with a CMS, you could serve content right out of a content database, rather than a filesystem. This presents some really nice options for versioning.

VPP for ZIP via #ZipLib.

VPP for ZIP via DotNetZip.
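
If you go the VPP route, the plumbing is fairly small. Below is a rough sketch of a VirtualPathProvider that serves pages out of a content database rather than the filesystem; ContentRepository is a hypothetical stand-in for your CMS's data access, while the VirtualPathProvider/VirtualFile pieces are the standard System.Web.Hosting extension point:

```
using System.IO;
using System.Web.Hosting;

// Hypothetical content store - replace with your CMS's database lookup.
public static class ContentRepository
{
    public static bool Exists(string virtualPath) { /* query the CMS database */ return false; }
    public static byte[] Load(string virtualPath) { /* read the stored bytes  */ return new byte[0]; }
}

public class DbVirtualFile : VirtualFile
{
    private readonly byte[] _content;

    public DbVirtualFile(string virtualPath, byte[] content) : base(virtualPath)
    {
        _content = content;
    }

    public override Stream Open()
    {
        return new MemoryStream(_content);
    }
}

public class DbPathProvider : VirtualPathProvider
{
    public override bool FileExists(string virtualPath)
    {
        return ContentRepository.Exists(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return ContentRepository.Exists(virtualPath)
            ? new DbVirtualFile(virtualPath, ContentRepository.Load(virtualPath))
            : Previous.GetFile(virtualPath);
    }
}

// Register it at application startup, e.g. in Global.asax Application_Start:
//   HostingEnvironment.RegisterVirtualPathProvider(new DbPathProvider());
```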

In an ideal high-availability situation, there should be no single point of failure.

That means a single box with the web pages on it is a no-no. Having done HA work for a major Telco, I would initially propose the following:

  • Each of the four servers has its own copy of the data.
  • At a quiet time, bring two of the servers off-line (i.e., modify the HA balancer to remove them).
  • Update the two off-line servers.
  • Modify the HA balancer to start using the two new servers and not the two old servers.
  • Test that to ensure correctness.
  • Update the two other servers then bring them online.

That's how you can do it without extra hardware. In the anal-retentive world of the Telco I worked for, here's what we would have done:

  • We would have had eight servers (at the time, we had more money than you could poke a stick at). When the time came for transition, the four offline servers would be set up with the new data.
  • Then the HA balancer would be modified to use the four new servers and stop using the old servers. This made switchover (and, more importantly, switchback if we stuffed up) a very fast and painless process.
  • Only when the new servers had been running for a while would we consider the next switchover. Up until that point, the four old servers were kept off-line but ready, just in case.
  • To get the same effect with less financial outlay, you could have extra disks rather than whole extra servers. Recovery wouldn't be quite as quick since you'd have to power down a server to put the old disk back in, but it would still be faster than a restore operation.

I was in charge of development for a game website that had 60 million hits a month. The way we did it was option #1. Users did have the ability to upload images and such, and those were put on a NAS that was shared between the servers. It worked out pretty well. I'm assuming that you are also doing page caching and so on, on the application side of the house. I would also deploy the new pages on demand, to all servers simultaneously.

What you gain from NLB with the 4 IIS servers, you lose to the bottleneck at the app server.

For scalability, I'd recommend putting the applications on the front-end web servers.

Here at my company we are implementing that solution: the .NET app on the front ends and an app server for SharePoint, plus a SQL 2008 cluster.

Hope it helps!

regards!

We have a similar situation to you and our solution is to use a publisher/subscriber model. Our CMS app stores the actual files in a database and notifies a publishing service when a file has been created or updated. This publisher then notifies all the subscribing web applications and they then go and get the file from the database and place it on their file systems.

We have the subscribers set in a config file on the publisher but you could go the whole hog and have the web app do the subscription itself on app startup to make it even easier to manage.

You could use a UNC path for the storage; we chose a DB for convenience and portability between our production and test environments (we simply copy the DB back and we have all the live site files as well as the data).
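
For illustration, here is a rough sketch of what the subscriber side of that model might look like, assuming the files live in a SQL table; the table and column names (SiteFiles, Path, Content) are made up, and the notification transport (web service, message queue, etc.) is up to you:

```
using System.Data.SqlClient;
using System.IO;

public class FileSubscriber
{
    private readonly string _connectionString;
    private readonly string _webRoot;

    public FileSubscriber(string connectionString, string webRoot)
    {
        _connectionString = connectionString;
        _webRoot = webRoot;
    }

    // Called by the publisher whenever a file is created or updated in the CMS.
    public void OnFileChanged(string relativePath)
    {
        byte[] content;
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Content FROM SiteFiles WHERE Path = @path", conn))
        {
            cmd.Parameters.AddWithValue("@path", relativePath);
            conn.Open();
            content = (byte[])cmd.ExecuteScalar();
        }

        // Write the file into this web server's local webroot.
        var target = Path.Combine(_webRoot, relativePath);
        Directory.CreateDirectory(Path.GetDirectoryName(target));
        File.WriteAllBytes(target, content);
    }
}
```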

A very simple method of deploying to multiple servers (once the nodes are set up correctly) is to use robocopy.

Preferably you'd have a small staging server for testing and then you'd 'robocopy' to all deployment servers (instead of using a network share).

robocopy is included in the MS ResourceKit - use it with the /MIR switch.

To give you some food for thought, you could look at something like Microsoft's Live Mesh. I'm not saying it's the answer for you, but the storage model it uses may be.

With the Mesh you download a small Windows Service onto each Windows machine you want in your Mesh and then nominate folders on your system that are part of the mesh. When you copy a file into a Live Mesh folder - which is the exact same operation as copying to any other folder on your system - the service takes care of syncing that file to all your other participating devices.

As an example, I keep all my code source files in a Mesh folder and have them synced between work and home. I don't have to do anything at all to keep them in sync; the action of saving a file in VS.NET, Notepad or any other app initiates the update.

If you have a web site with frequently changing files that need to go to multiple servers, and presumably multiple authors for those changes, then you could put the Mesh service on each web server, and as authors added, changed or removed files the updates would be pushed automatically. As far as the authors go, they would just be saving their files to a normal old folder on their computer.

Use a deployment tool, with a process that deploys one server at a time while the rest of the system keeps working (as Mufaka said). This is a tried process that will work with both content files and any compiled piece of the application (whose deployment causes a recycle of the ASP.NET process).

Regarding the rate of updates, this is something you can control. Have the updates go through a queue, and have a single deployment process that controls when to deploy each item. Note this doesn't mean you process each update separately; you can grab the current updates in the queue and deploy them together. Further updates will arrive in the queue and will be picked up once the current set of updates is done.
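
As a minimal sketch of that batching idea (not the answerer's actual code), the loop below blocks until at least one update arrives, then drains whatever else is already queued and deploys the whole batch in one pass. An in-process BlockingCollection stands in for a durable queue such as MSMQ or a SQL-backed table, which are mentioned below:

```
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

class DeploymentQueue
{
    static readonly BlockingCollection<string> Updates = new BlockingCollection<string>();

    // CMS publish events call this with the id/path of the changed item.
    public static void Enqueue(string itemPath) { Updates.Add(itemPath); }

    static void Main()
    {
        while (true)
        {
            // Block until at least one update arrives...
            var batch = new List<string> { Updates.Take() };

            // ...then grab everything else that is already waiting.
            string next;
            while (Updates.TryTake(out next))
            {
                batch.Add(next);
            }

            Deploy(batch);   // push the whole batch to the web servers in one pass
        }
    }

    static void Deploy(IList<string> items)
    {
        // Placeholder: hand the batch to your publishing tool / rolling deploy.
        Console.WriteLine("Deploying " + items.Count + " item(s) at " + DateTime.Now);
    }
}
```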

Update: About the questions in the comment. This is a custom solution based on my experience with heavy/long processes which need their rate of updates controlled. I haven't had the need to use this approach for deployment scenarios, as for such dynamic content I usually go with a combination of DB and cache at different levels.

The queue doesn't need to hold the full information; it just needs the appropriate info (ids/paths) that will let your process kick off the publishing with an external tool. As it is custom code, you can have it gather the information to be published, so you don't have to deal with that in the publishing process/tool.

The DB changes would be done during the publishing process; again, you just need to know where the info for the required changes is and let the publishing process/tool handle it. Regarding what to use for the queue, the main ones I have used are MSMQ and a custom implementation with info in SQL Server. The queue is just there to control the rate of the updates, so you don't need anything specifically targeted at deployments.

Update 2: make sure your DB changes are backwards compatible. This is really important, when you are pushing changes live to different servers.

Assuming your IIS servers are running Windows Server 2003 R2 or better, definitely look into DFS Replication. Each server has its own copy of the files, which eliminates the shared-network bottleneck that many others have warned against. Deployment is as simple as copying your changes to any one of the servers in the replication group (assuming a full mesh topology). Replication takes care of the rest automatically, including using remote differential compression to only send the deltas of files that have changed.

We're pretty happy using 4 web servers, each with a local copy of the pages, and a SQL Server with a failover cluster.
