Question

We have a 42U rack which will be getting a load of new 1U and 2U servers soon. One of the guys here reckons that you need to leave a 1U gap between the servers to aid cooling.

Question is, do you? Looking around the datacenter, no-one else seems to be doing this, and it also reduces how much we can fit in the rack. We're using Dell 1850 and 2950 hardware.


Solution

Simply NO. Servers, switches, KVMs, and PSUs are all designed to be racked stacked on top of each other. I'm basing this on a few years spent building COs and data centers for AT&T.
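To put numbers on the capacity cost the questioner mentions, here's a quick back-of-the-envelope sketch in Python. The figures are purely illustrative and assume a 42U rack filled with 1U servers:

    # Back-of-the-envelope: what a 1U gap between servers costs in a 42U rack.
    RACK_UNITS = 42

    no_gaps = RACK_UNITS          # one 1U server per U -> 42 servers
    with_gaps = RACK_UNITS // 2   # server + 1U gap occupies 2U -> 21 servers

    lost = 100 * (no_gaps - with_gaps) / no_gaps
    print(f"no gaps:  {no_gaps} servers")
    print(f"1U gaps:  {with_gaps} servers")
    print(f"capacity given up: {lost:.0f}%")

In other words, a 1U gap after every 1U server halves the rack's capacity, which is exactly why nobody else in the datacenter is doing it.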

OTHER TIPS

You don't need to leave a gap between systems for gear designed to be rack-mountable. If you were building the systems yourself you'd need to select components carefully: some CPU and motherboard combinations run too hot even though they physically fit inside a 1U case.

Dell gear will be fine.

You do need to keep the space between and behind the racks clear of clutter. Most servers today channel their airflow front to back; if you don't leave enough open air behind the rack, it will get very hot back there and reduce the cooling capacity.
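The physics behind this is easy to sanity-check with the standard rule-of-thumb airflow formula, CFM ≈ 3.412 × W / (1.08 × ΔT°F). A minimal sketch in Python; the 5 kW load and the temperature rises are assumed example values, not figures from this answer:

    # Rule-of-thumb airflow needed to carry away a given heat load:
    #   CFM = 3.412 * watts / (1.08 * delta_T_F)
    # (from Q[BTU/hr] = 1.08 * CFM * delta_T_F and 1 W = 3.412 BTU/hr)

    def required_cfm(load_watts: float, delta_t_f: float) -> float:
        """Airflow (cubic feet per minute) needed to remove load_watts
        at a given inlet-to-exhaust temperature rise in Fahrenheit."""
        return 3.412 * load_watts / (1.08 * delta_t_f)

    # Hypothetical 5 kW rack with a healthy 25 F rise front to back:
    print(f"{required_cfm(5000, 25):.0f} CFM")   # ~632 CFM

    # If hot exhaust recirculates and the usable rise drops to 15 F,
    # the same load suddenly needs much more air:
    print(f"{required_cfm(5000, 15):.0f} CFM")   # ~1053 CFM

The second case is what a cluttered hot aisle does: the load hasn't changed, but the cooling system has to move far more air to remove the same heat.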

On a typical 48-port switch the front panel is covered with RJ-45 connectors and the back with redundant power connections, PoE power tray hookups, stacking ports, and uplinks. Many 1U network switches route their airflow side to side because they can't get enough air through the maze of connectors front to back. So you also need to make sure the channels beside the rack are relatively open, to let the switches get enough airflow.

In a crowded server rack, tidiness is important.

I agree with Unkwntech that gaps are not normally required, but I think there are two things to watch out for:

1) Equipment that is not as deep as the rest may have trouble ventilating if mounted below deeper equipment (see the diagram below). This is of course less of a concern in a well-ventilated server room.

TOP OF RACK
===============
===============
===============
===============
===============
======== (Shallow equipment, trapped hot air)

2) When mounting equipment in a cabinet, you usually need to leave a few inches clear at the top to allow proper ventilation.

Generally, no. That's kind of the whole point of a 1U server: if it needed extra space (even for cooling) they'd give it a bigger chassis and call it 2U. In some designs, where the airflow is controlled and only the rack is supposed to be cooled, a gap is even counter-productive, as it allows the warm air from the back to flow forward and mix with the cool air in the front, reducing cooling efficiency. Even when you leave gaps for logical groupings, you're supposed to plug them with blanking panels to control the airflow.

Unfortunately, in practice some whitebox vendors push too hard for that 1U designation, and you'll find that if you stack too many servers too close together without the occasional gap for airflow, you have issues. This isn't a problem with good-quality servers and an adequate cooling design, but the bottom end of the market might surprise you.

The last two places I worked had large datacenters, and they stacked all their servers and appliances with no gaps. The servers got plenty of cooling from their internal fans. It is also recommended to run the racks on a raised floor, with perforated tiles in front of the racks and the A/C air return above the rear of the racks for circulation.
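If you want to verify that a tightly packed rack is actually staying cool, one low-effort check is to poll each server's ambient/inlet temperature sensor over IPMI. A minimal sketch using ipmitool; the BMC addresses, credentials, and warning threshold are assumptions you'd replace with your own, and sensor names vary by vendor and model:

    import subprocess

    # Hypothetical BMC addresses and credentials; replace with your own.
    HOSTS = ["10.0.0.11", "10.0.0.12"]
    USER, PASSWORD = "root", "changeme"
    WARN_AT_C = 27  # assumed warning threshold for inlet temperature

    for host in HOSTS:
        # 'ipmitool sdr type temperature' lists the temperature sensors
        # that the BMC exposes.
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", USER, "-P", PASSWORD, "sdr", "type", "temperature"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            if "Ambient" in line or "Inlet" in line:
                # Typical line: "Ambient Temp | 0Eh | ok | 7.1 | 23 degrees C"
                reading = line.split("|")[-1].strip()
                print(f"{host}: {reading}")
                value = reading.split()[0]
                if value.isdigit() and int(value) >= WARN_AT_C:
                    print(f"  WARNING: inlet above {WARN_AT_C} C")

If the inlet readings stay flat as you fill the rack, the room's cooling is keeping up; steadily climbing inlet temperatures on the middle and upper servers are the usual sign of hot-air recirculation, not of missing 1U gaps.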
