You could save loads of energy when it comes to your data-centre cooling, but the Devil is in the detail, reports Wayne Rash.
Your data centre may be the biggest single consumer of electrical power in your company, but as always, everything depends on your company and on your data centre. A data centre can account for up to half of a company’s total power consumption, though there are exceptions at both extremes.
Google’s business is built almost entirely on data centres around the world, so the vast majority of the company’s power consumption goes to them. For some manufacturing companies, on the other hand, the data centre represents only a tiny percentage of their power needs.
Improving data centre efficiency can mean big improvements on the bottom line. For the companies that provide what are called “Utility Computing Services” in the trade (think Google, Facebook, and some other obvious candidates), data centre efficiency can make or break the company.
But most companies aren’t doing utility computing. They just need to run a data centre and they need to keep the cost within reason. For these companies—probably including yours—improving data centre efficiency can make a significant difference.
Basically, there are two components to the power needed to keep a data centre running. The first is the electrical power actually consumed by the computing and communications infrastructure. The other is the electrical power it takes to keep all of that equipment cool and to run the electrical delivery system. Each accounts for about half of the total power requirement in most legacy data centres.
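That 50/50 split is what the industry’s standard metric, Power Usage Effectiveness (PUE, total facility power divided by IT power), expresses as a PUE of about 2.0. The short Python sketch below shows the arithmetic; the load figures are purely illustrative and not drawn from any data centre discussed in this article.

```python
# Illustrative only: a 50/50 split between IT load and cooling/power-delivery
# overhead, as described above, works out to a PUE of about 2.0.

def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

it_load_kw = 500.0      # hypothetical draw of the computing and comms kit
overhead_kw = 500.0     # hypothetical cooling and power-delivery draw
total_kw = it_load_kw + overhead_kw

print(f"PUE = {pue(total_kw, it_load_kw):.2f}")   # prints: PUE = 2.00
```

The lower you can push the overhead half of that ratio, the closer PUE gets to 1.0, which is the point of most of the measures discussed below.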
For data centre managers who aren’t starting with a clean sheet of paper, the prospect of improving their efficiency may seem overwhelming. “A typical data centre manager is going to have a data centre and figure out what to do to make it more efficient,” says Richard Hodges, president of GreenIT, a California-based consultancy. “The basic process starts with first baselining what you have.”
The process consists of answering a series of questions about your data centre. “In answering those questions, you produce a series of processes that may or may not be applicable,” Hodges says. “It’s a very thorough compilation of things that can be done.”
Of course, not every possible measure applies to your data centre, and many more may not be practicable. If you’re working with an existing facility it may not be feasible to change everything, but there is still a lot that can be done. According to John Bean, director of innovation for thermal solutions at Schneider Electric’s APC division, some of the easiest steps can make a big difference. For example, Bean suggests that the first thing you should do is get your white space under control.
“You still see people who violate hot aisle/cold aisle practices,” Bean says. “They’re not putting up barriers to control hot air and cold air.” Once you have that under control, Bean explains, you need to decide whether to use perimeter cooling or in-row cooling, where the cooling air is directed to the cold aisles of the data centre.
Hot aisle/cold aisle cooling is a system in which the exhaust side of data centre equipment is sent into a contained space, and then removed from there and sent for cooling. This ensures that equipment isn’t ingesting hot air exhausted by another piece of equipment.
According to Julius Neudorfer, CTO of North American Access Technologies in New York, much of the control of hot and cold air is just common sense. The first steps he suggests should be the easiest, such as installing blanking panels in equipment racks so that hot air doesn’t mix with cold air. “Separating the airflows is a big area,” he says. Neudorfer adds that controlling air under the raised floor in the data centre is an important part of controlling air leakage. This includes closing open spaces in the floor, and controlling places where cables pass through the floor. “You need to use grommets to keep the air in the floor,” Neudorfer says.
One often-overlooked step Bean mentioned is keeping the space under the raised floor clear of junk. He has found everything from spools of wire and ladders to boxes of office supplies stored under the raised floor of a data centre. All of these items interfere with airflow and raise the cost of cooling.
There have even been reports of staff keeping their beer supply under the raised floor, which raises a number of problems, one of which is interference with the flow of cold air. (It probably isn’t ideal for the beer either.)
And of course, nearly everyone recommends simply raising the temperature of the data centre. “The gold standard of meat locker temperatures in the data centre has been reviewed,” Neudorfer says. “Standards now allow temperatures up to [27 Celsius]. It could go higher.” The reason is that today’s servers and communications equipment operate perfectly well at very warm temperatures. “The industry as a whole has begun feeling more comfortable with raising the temperature. For every [half degree C] you get 5% to 10% savings,” Neudorfer says.
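Neudorfer’s rule of thumb adds up quickly. The sketch below treats the per-half-degree saving as compounding and uses a hypothetical cooling baseline and a modest two-degree raise; the 5% to 10% range is his figure, and everything else is an assumption for illustration.

```python
# Hypothetical illustration of the quoted rule of thumb: roughly 5-10%
# cooling-energy savings per half degree C of set-point raise, treated
# here as compounding with each half-degree step.

def cooling_after_raise(baseline_kwh, degrees_raised, saving_per_half_degree):
    """Cooling energy remaining after raising the set point."""
    steps = degrees_raised / 0.5
    return baseline_kwh * (1 - saving_per_half_degree) ** steps

baseline_kwh = 100_000.0   # hypothetical annual cooling energy at the old set point

for rate in (0.05, 0.10):
    remaining = cooling_after_raise(baseline_kwh, degrees_raised=2.0,
                                    saving_per_half_degree=rate)
    saved = baseline_kwh - remaining
    print(f"At {rate:.0%} per half degree: roughly {saved:,.0f} kWh saved "
          f"of {baseline_kwh:,.0f}")
```

Even on those deliberately conservative numbers, a two-degree raise takes somewhere between a fifth and a third off the cooling bill.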
Bean suggests that ambient outside air is often cool enough. While there are attempts, especially in Europe, to simply open up the windows and let cool air into the data centre, Bean isn’t sure that’s necessarily a good idea. He mentions the danger posed by particulate matter such as smoke, and potentially by events such as chemical spills.
However, he suggested that an air-to-air heat exchanger can keep the data centre air both cool and clean in most areas. “If the air is cool enough you can totally cool the air without running compressors,” Bean says. Refrigeration compressors are a huge component of cooling costs, and anything you can do to cut down on their use saves energy.
A variety of other approaches to cooling your data centre don’t require air conditioning compressors and their heavy energy demands. Bean suggests evaporative cooling and mentions that where wet-bulb temperatures rarely rise above 20 C, such relatively efficient cooling is feasible.
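How often that is possible depends on the local climate. The sketch below is a hypothetical feasibility check: given hourly wet-bulb readings for a site (the sample data here is invented), it counts how many hours of the year fall at or below the roughly 20 C threshold Bean mentions.

```python
# Illustrative feasibility check for evaporative cooling: count the hours
# in a year when the wet-bulb temperature stays at or below the ~20 C
# threshold mentioned above. The sample readings are invented.

def evaporative_cooling_hours(hourly_wet_bulb_c, threshold_c=20.0):
    """Count hours in which evaporative cooling alone could do the job."""
    return sum(1 for t in hourly_wet_bulb_c if t <= threshold_c)

# Hypothetical year of hourly wet-bulb readings; substitute real weather data.
sample_year = [14.0] * 6000 + [22.0] * 2760

hours = evaporative_cooling_hours(sample_year)
share = hours / len(sample_year)
print(f"{hours} of {len(sample_year)} hours ({share:.0%}) suit evaporative cooling")
```

The remaining hours are the ones where conventional, compressor-based cooling would still have to carry the load.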
Hodges, meanwhile, suggests that true outside air works fine for some companies because modern servers aren’t nearly as sensitive to pollution and humidity as their predecessors were. He even pointed out Microsoft’s experiment of putting a data centre under a tent to get natural airflow.
Of course, having modern hardware isn’t necessarily a given, even though upgrading servers and power-handling equipment will often pay for itself within the economic lifetime of the new kit. Neudorfer pointed out that modern UPS hardware is over 90% efficient, compared with about 60% for similar hardware that’s 10 years old.
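That efficiency gap is easy to put a number on. The rough sketch below uses the 90% and 60% figures Neudorfer quotes; the steady 200 kW IT load is a hypothetical figure chosen purely for illustration.

```python
# Rough comparison of energy lost in UPS conversion, using the efficiency
# figures quoted above (about 90% for a modern UPS, about 60% for a
# ten-year-old one). The IT load is hypothetical.

HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(it_load_kw, efficiency):
    """Grid energy drawn minus energy actually delivered to the IT load."""
    delivered_kwh = it_load_kw * HOURS_PER_YEAR
    drawn_kwh = delivered_kwh / efficiency
    return drawn_kwh - delivered_kwh

it_load_kw = 200.0   # hypothetical steady IT load

for label, eff in (("10-year-old UPS (60%)", 0.60), ("modern UPS (90%)", 0.90)):
    loss = annual_ups_loss_kwh(it_load_kw, eff)
    print(f"{label}: about {loss:,.0f} kWh lost per year")
```

On those assumptions the older UPS wastes several times as much energy as the new one, before you even count the extra cooling needed to remove the heat those losses produce.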
So if there’s all of this efficiency available, why aren’t more data centres taking advantage of it? According to Hodges, it’s mostly lack of incentive. “The biggest impediment is that there’s no motivation,” Hodges said. “They don’t pay the bill, and they’re more concerned about uptime.”
While IT managers should be held accountable for their energy usage, Hodges suggests, usually that isn’t possible. “Frequently data centres are just one part of the building,” Hodges explained. “They have no idea how much they’re spending on power now, and they don’t seem to care how much they’ll need if they build what they plan.”
Complicating things, it may not be possible to find out how much energy they’re using. “It’s often included with the rent,” Hodges says, and energy usage isn’t broken out as a separate item. This may change. “Real estate agents are beginning to address this in leasing arrangements,” he adds.
The real problem with improving data centre efficiency isn’t a lack of means; those clearly exist. Rather, as Hodges and others note, it’s hard to get data centre managers to focus on reducing energy use because they lack both the necessary information and the necessary motivation.
That may change as data centre energy use is addressed in laws, and as councils require levels of efficiency as part of the planning process. But mostly, IT managers have to be given the information they need, and then held accountable for meeting energy-use standards, before meaningful change takes place.