Conventional wisdom held that you should spread power and cooling evenly around the server room. Gartner says that's no longer so, as Andy Patrizio discovers...
Data centre design and layout have been fairly consistent since we first started building server farms: spread everything out evenly, use the hot aisle/cold aisle design, and keep the room cold enough to store meat.
But that conventional wisdom is changing, as vendors and their customers alike look for ways to cut the cooling and power bill. It didn't take long for data centre operators to realise that the hardware acquisition costs were nothing compared to what it cost to keep these things running and cooled.
For starters, the emphasis on freezer-like temperatures is starting to thaw, driven by the hardware vendors. Intel published research back in 2008 showing that a data centre could run just fine on outside air, with a desert as its test location.
More conventional wisdom falling by the wayside: how a data centre is set up and configured. For the longest time, cabinets sat in neat rows, evenly distributed around the room, with power distribution between 2 kW and 4 kW per rack.
However, IT research firm Gartner is recommending a change from the even spread, akin to suburban housing design, to something more like urban design, with areas of higher density offset by areas of lower density.
“High-density zones are by far the best way to manage the differences in the life cycle changes of data centers' building structures, electromechanical equipment, and IT equipment,” wrote Rakesh Kumar, research vice president at Gartner, in a statement.
Old Designs Growing Obsolete
Gartner defines a high-density zone as one where the energy needed is more than 10 kW per rack for a given set of rows. This is not hard to reach: a rack that's only 60% filled can have a power draw as high as 12 kW. Because blade servers pack a lot of compute power into a small area, they also need a lot of power. Four-socket blades are commonplace, and each socket gets accompanying memory slots. That adds up fast.
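The arithmetic behind that 12 kW figure can be sketched with a back-of-envelope calculation. The per-blade wattage and chassis counts below are illustrative assumptions, not figures from Gartner's research:

```python
# Back-of-envelope rack power estimate. All figures here are
# hypothetical, chosen only to show how a partly filled blade rack
# can exceed the 10 kW high-density threshold.

BLADES_PER_CHASSIS = 16   # assumed blade chassis capacity
CHASSIS_PER_RACK = 4      # assumed chassis per 42U rack
WATTS_PER_BLADE = 300     # assumed draw of a four-socket blade + memory

def rack_power_kw(fill_fraction: float) -> float:
    """Estimated rack power draw at a given fill level, in kilowatts."""
    blades = BLADES_PER_CHASSIS * CHASSIS_PER_RACK * fill_fraction
    return blades * WATTS_PER_BLADE / 1000

print(rack_power_kw(0.6))  # a 60%-filled rack under these assumptions
```

Under these assumed numbers, a 60%-filled rack draws about 11.5 kW, comfortably past the 10 kW threshold, which is in line with the article's 12 kW example.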
With the growing use of high-density blade systems, Gartner says, the old design envelope is no longer sufficient. It recommends that data centres now set up an area of high-density computing running at 10 kW or more per rack, offset by areas of lower power consumption elsewhere in the data centre.
The even spread was practical when most IT equipment used roughly the same amount of energy and needed a similar amount of cooling. But new technologies, such as high-density blade servers and network fabric architectures, are creating imbalances of need within the data centre.
Using traditional data centre designs, the whole raised floor would have to be engineered for the most demanding workloads, even though only 20% of the equipment might actually need that capacity. The result is over-engineering: putting a lot of cooling into areas that need only a fraction of it.
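The cost of that over-engineering is easy to see with some made-up but plausible numbers. The per-square-metre figures below are illustrative assumptions, not data from the article:

```python
# Illustrative comparison: cooling capacity installed when the whole
# raised floor is engineered for the peak load, versus zoning where
# only the dense fraction of the floor gets the peak provision.
# All figures are hypothetical.

floor_m2 = 1000
peak_kw_per_m2 = 3.0      # assumed requirement of the densest racks
typical_kw_per_m2 = 0.8   # assumed requirement of everything else
dense_fraction = 0.20     # share of the floor that needs the peak

uniform = floor_m2 * peak_kw_per_m2
zoned = floor_m2 * (dense_fraction * peak_kw_per_m2
                    + (1 - dense_fraction) * typical_kw_per_m2)

print(uniform, zoned)  # installed cooling capacity in kW, each approach
```

With these assumptions, engineering the whole floor for the peak means installing 3,000 kW of cooling, versus about 1,240 kW for the zoned layout, more than twice the capacity for the same equipment.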
Then there's the fact that the current method of cooling – perforated tiles in the floor right in front of the cabinet, through which cool air rises and is drawn into the bottom of the cabinet – is increasingly ineffective at densities above 15 kW per rack. So a high-density zone requires supplementary cooling, such as a chilled-water system, hot/cold aisle containment, or in-row/in-rack cooling.
And blade server density is only going to skyrocket, according to Lex Coors, a vice president at Interxion, a London-based data centre and managed services provider. Even your non-high-density servers are going to consume more power in the future, he says.
"We see a typical load of cabinets for newer contracts at seven kilowatts as standard. That is a big change. Only three years ago the standard was 2.5 to 3 kilowatts. IT load is growing. In the coming ten years, the standard IT load will be 15 to 20 kilowatts per rack," Coors says.
So Gartner wants people to change how they design and lay out their data centres. This applies both to new data centres and to those more than five years old that are due for retrofits or upgrades.
It recommends allocating 20% to 30% of the data centre's raised floor space to a high-density zone, ensuring that zone has capacity for five to ten years of growth, and fitting it with a refrigerant-based cooling system in addition to air cooling.
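Gartner's sizing guidance can be sketched as a small planning calculation. The growth rate and rack counts below are assumptions for illustration, not Gartner figures:

```python
# Sketch of the sizing guidance: reserve a slice of the raised floor
# as a high-density zone, and check that projected rack growth over
# the planning horizon still fits. Growth inputs are hypothetical.

import math

def zone_size_m2(raised_floor_m2: float, fraction: float = 0.25) -> float:
    """Zone area, using the recommended 20-30% of raised floor
    (25% midpoint by default)."""
    return raised_floor_m2 * fraction

def racks_needed(current_racks: int, annual_growth: float, years: int) -> int:
    """Projected high-density rack count after compound growth."""
    return math.ceil(current_racks * (1 + annual_growth) ** years)

print(zone_size_m2(1000))          # zone area for a 1,000 m2 floor
print(racks_needed(40, 0.10, 7))   # assumed 10%/yr growth, 7-year horizon
```

The point of the projection step is the five-to-ten-year horizon Gartner mentions: the zone is sized against where demand is heading, not where it is today.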
Gartner also recommends using containment to direct all hot air from the tops of the racks straight up and out through the plenum, so it is not sucked back into the cabinets. Installing containment closes off easy access to the backs of the cabinets – the "hot aisle" – but that's not a place most people want to go very often anyway.
The goal of the high-density zone is to create an area that handles heavy compute loads while leaving room for growth – in other words, a way to future-proof against spikes in demand. Gartner recommends making the zone large enough to accommodate predicted IT capacity growth, which typically means 20 to 25% of the raised floor space.
Will It Work?
Two veteran data centre designers, however, disagree with Gartner's conclusion.
Coors isn't keen on the idea: if you build a high-density zone, that's where all your compute power will be, and if your needs exceed it, the rest of the data centre can't keep up. "If you identify loads to specific areas in the data centre and have sold all of your high density areas, especially as a co-locator, and you still have customers who need compute power, what do you do? It's a missed opportunity," he says.
That scenario may apply to a service provider, but for a private firm building its own data centre for internal use, it's a little different, he acknowledges. "They will have a forecast of their own use of IT for the next two to three years. So then it becomes more clear that certain areas will be allocated as high density and others as low density," he says.
Coors recommends staggering the growth of a large data centre over a two-to-three-year build-out: put in a power backbone with capacity for maximum power everywhere at first, but don't run it at its limit. "You should build a backbone that can handle ten megawatts, but don't build a ten megawatt environment today. Just build for your IT load. That will give you the opportunity to use your backbone for other higher densities," he advises.
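Coors's staggered approach can be sketched as a simple fit-out rule. The load figures and headroom buffer below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical build-out schedule following Coors's advice: size the
# power backbone for the eventual maximum on day one, but only fit
# out capacity (cooling, PDUs) to track the actual IT load each year.
# The loads and headroom figure are illustrative assumptions.

BACKBONE_MW = 10.0  # full backbone capacity, built up front

def fitted_capacity_mw(it_load_mw: float, headroom: float = 0.2) -> float:
    """Capacity to fit out now: current load plus a growth buffer,
    capped at what the backbone can deliver."""
    return min(it_load_mw * (1 + headroom), BACKBONE_MW)

for year, load in enumerate([2.0, 3.5, 5.0], start=1):
    print(year, fitted_capacity_mw(load))  # MW fitted out each year
```

The design choice here is that the expensive, disruptive part (the backbone) is sized once for the end state, while the incremental fit-out follows demand, leaving spare backbone capacity available for future high-density areas.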
Sam Fleitman, Chief Operations Officer of Infrastructure-as-a-Service provider SoftLayer, says that for an IaaS firm like his, it makes sense to spread out the load rather than clump it in one area.
"It depends on your business model and your needs in the data centre," he says. "SoftLayer's approach is significantly different, in that we take a very cookie-cutter approach to our facilities and layouts. Every rack is outfitted the same way. With the server fleet we have, we distribute the power load we have."
He adds, "I can see its value if your load is uncertain. If you're not fairly certain of how you are going to deploy your equipment in a data centre, then creating a high density zone is not a bad idea. My personal take is it's not necessary."