Here are some great ideas to save you money and reduce your CO2 emissions. Reducing your electricity bill by improving your energy efficiency has the added benefit of being greener.
Similarly, paying attention to the capital cost of data center equipment — both the servers themselves and the cooling plant — can reduce the amount of “embodied” energy used. The energy used to run a data center is only part of the story; it’s also important to consider the energy that was already used to manufacture the equipment.
When Google designed its new $300 million Hamina datacenter, in the south-east of Finland, it discovered a natural resource for equipment cooling right on its doorstep: cold seawater.
Hamina is on the coast of the Gulf of Finland, about 100 miles east of Helsinki. Google realized it could implement an idea that’s sometimes called seawater air conditioning (SWAC) or deep-water source cooling. According to Joe Kava, Google’s senior director of datacenter construction and operations, this has never been successfully done at the scale at which Google is using it.
The idea is to draw in cold water from about 25 feet below the surface, which remains at just a few degrees above freezing all year round. The water is passed through a heat exchanger to cool the air, which then extracts waste heat from the server racks in the conventional way.
A heat exchanger in the Hamina datacenter (source: Google)
Sounds simple, but there are the inevitable devilish details:
Seawater contains foreign bodies, including sand, driftwood, and… well… critters. Filtration is needed, so you don’t clog up your heat exchangers. Keeping the filters clean is of course very important; your water flow rate drops with a clogged filter. Google uses four levels of filtration, from coarse to fine. Each filter stage and the heat exchangers themselves are designed so that they can be cleaned in-place, by temporarily isolating one redundant subsystem and flushing it with a cleaning solution.
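As a rough illustration of the kind of monitoring this implies, the sketch below flags a filter stage for an in-place flush once the pressure drop across it climbs well above its clean baseline (a clogging filter restricts water flow). The stage names, pressure figures, and threshold are invented for the example; Google hasn’t published how its control system actually decides when to clean.

```python
# Hypothetical monitoring rule (not Google's actual control system): flag a
# filter stage for an in-place flush once the pressure drop across it rises
# well above its clean baseline, since a clogging filter restricts water flow.
def needs_cleaning(dp_measured_kpa, dp_clean_kpa, limit_ratio=2.0):
    """True when the measured pressure drop exceeds limit_ratio x the clean value."""
    return dp_measured_kpa > limit_ratio * dp_clean_kpa

# Made-up (measured, clean-baseline) pressure drops for three example stages.
stages = {"coarse": (12.0, 5.0), "medium": (9.0, 6.0), "fine": (20.0, 8.0)}
for name, (measured, clean) in stages.items():
    if needs_cleaning(measured, clean):
        print(f"isolate and flush the {name} filter stage")
```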
Seawater promotes corrosion, because it contains dissolved salts. Conventional heat exchangers won’t last long filled with salt water, unless they’re carefully designed. Google used materials such as titanium for the exchangers and fiberglass for the pipework. Use of fiberglass pipe also requires pressure regulation devices, in case of pump failure, because of the fluid hammer effect. (Note also that the Gulf of Finland is part of the Baltic “Sea,” which is actually more like a huge, brackish lake, or enormous estuary. The salt content of its water is lower than that of the neighboring North Sea, into which it flows.)
Cooling the heated water is important, because returning heated water to the sea would be ecologically unsound. Without cooling the returned water to a temperature close to the original, you’d be replacing one eco problem — CO2 emissions — with another — damage to the aquatic ecosystem. Google solves this problem by only using a small proportion of the inlet water for the heat exchangers; the bulk of the water goes to cooling the heated outlet water. The hot and cold water is mixed in a process that Google confusingly calls “tempering.” The mixed water is now only a little warmer than the original, and can be safely returned to the sea.
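The tempering step is a straightforward energy balance. The sketch below mixes a small stream of warm outlet water with a much larger stream of cold inlet water; the temperatures and flow ratio are illustrative assumptions, not Google’s figures.

```python
# Simple mixing (energy-balance) calculation. The temperatures and flow ratio
# are illustrative assumptions, not Google's figures; they just show why
# blending the warm outlet with plenty of cold inlet water brings it back
# close to the original sea temperature.
def tempered_temp(t_hot, flow_hot, t_cold, flow_cold):
    """Mixed-water temperature, assuming both streams have equal heat capacity."""
    return (t_hot * flow_hot + t_cold * flow_cold) / (flow_hot + flow_cold)

# e.g. one part of 25 C outlet water blended with nine parts of 4 C seawater
print(tempered_temp(25, 1, 4, 9))   # ~6.1 C, only slightly above the inlet
```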
The infrastructure is expensive, because it requires engineering large inlet tunnels more than 25 feet below ground, in order to reach deep water at a stable, cold temperature. Building this from scratch would be a major capital investment, which may not have offered a return in lower energy usage. Similarly, the embodied energy used in construction may not be less than the energy saved; the risk is that you end up emitting more CO2 overall. In Hamina, Google built the datacenter on the site of an old paper mill, which already had suitable water tunnels, because large amounts of water were used in the paper-making process.
Google expects that, when fully operational, the power usage effectiveness (PUE) for its new datacenter will be around 1.1, which means that it uses only 10% additional power for cooling, lighting, etc. — an impressive, class-leading figure.
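To see what that figure means in practice, here is the PUE arithmetic spelled out. Only the 1.1 ratio comes from Google; the 10MW IT load is a hypothetical number used to make the sums concrete.

```python
# PUE = total facility power / IT equipment power. The 1.1 ratio is Google's
# figure; the 10 MW IT load is a hypothetical number for illustration.
it_power_mw = 10.0
pue = 1.1

total_power_mw = it_power_mw * pue           # 11.0 MW drawn from the grid
overhead_mw = total_power_mw - it_power_mw   # 1.0 MW for cooling, lighting, etc.
print(total_power_mw, overhead_mw)           # overhead is 10% of the IT load
```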
Hamina is just one of three chiller-less, or free-cooling, ideas that Google has been demonstrating at its European data center locations.
Smart Storage Design
When online backup service provider Backblaze was starting up, it couldn’t buy storage arrays that were inexpensive enough to suit its bargain-basement business model; Backblaze charges its customers $5/month for unlimited space. So the team decided to design their own, based on consumer-class hardware.
The result is the Backblaze Pod: a 4U rackable server containing as many cheap, desktop-grade SATA drives as could be squeezed in. At 2009 prices, using 1.5TB drives (at the time the largest drives available at the lowest price per byte), they achieved 67TB of storage for under $8,000. At today’s prices, a similar cost could provide 90TB; a typical 19-inch rack containing nine Pods could store more than ¾ of a petabyte.
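The back-of-the-envelope arithmetic works out as follows. The drive size, pod cost, and pods-per-rack figures are the ones quoted above; the cost-per-gigabyte result is for raw capacity, before any RAID overhead.

```python
# Figures quoted above: 45 drives per pod, 1.5TB drives and an ~$8,000 build
# cost at 2009 prices, 90TB pods at today's prices, nine pods per rack.
drives_per_pod = 45
drive_tb = 1.5
pod_cost_usd = 8_000

raw_tb_per_pod = drives_per_pod * drive_tb            # 67.5 TB raw per pod
cost_per_gb = pod_cost_usd / (raw_tb_per_pod * 1000)  # ~ $0.12 per raw GB

rack_tb_today = 9 * 90                                # 810 TB, i.e. over 3/4 PB
print(raw_tb_per_pod, round(cost_per_gb, 3), rack_tb_today)
```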
A Backblaze Pod, showing its 45 SATA drives (source: Harvard)
It’s not an incredible performer, nor does it offer the ultimate in reliability; both limitations stem from its consumer-class hardware. But it gets the job done for Backblaze’s needs: it provides huge amounts of storage on a bootstrapped startup budget.
The Pod is based on a regular Intel motherboard and Core 2 processor, 4GB of RAM, four SATA cards, nine SATA multiplier backplanes, 45 SATA hard drives, and a dedicated IDE boot drive. Along with a custom-designed chassis with six 120mm fans, it sports a pair of 760W power supplies — not for redundancy, but simply to supply sufficient power to run all the components (it’s less expensive to buy two desktop-grade power supplies than a single supply designed for a large storage server).
Running 64-bit Linux, it serves up three RAID6 volumes, formatted with JFS, over gigabit Ethernet; Apache Tomcat handles storage requests over HTTPS, rather than using generic storage technologies such as iSCSI, Fibre Channel, or NFS.
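To make the “storage over HTTPS” idea concrete, here is a deliberately minimal sketch of a handler that accepts PUT requests and writes each object to disk. It’s written in Python purely for illustration; Backblaze’s actual service runs on Apache Tomcat, and the storage path and port below are hypothetical.

```python
# Minimal sketch of serving storage over HTTP(S): a handler that accepts PUT
# requests and writes each object to disk. Illustrative only; Backblaze's real
# service runs on Apache Tomcat, and the storage path and port are hypothetical.
# TLS termination is omitted for brevity.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

STORAGE_ROOT = "/data/vol1"  # hypothetical mount point of one RAID6 volume

class BlobHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        name = os.path.basename(self.path)           # crude path sanitization
        length = int(self.headers["Content-Length"])
        with open(os.path.join(STORAGE_ROOT, name), "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)                      # 201 Created
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BlobHandler).serve_forever()
```

Serving objects through an ordinary web stack keeps the protocol simple and firewall-friendly, at the cost of the block- and file-level semantics that iSCSI or NFS would provide.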
The entire hardware design is open source; full specifications are at blog.backblaze.com. (The company retains some secret sauce — its techniques for de-duplication, encryption, and managing a data center full of Pods.) The project inspired others to innovate and improve on the original design. Two notable examples are OpenStoragePod and Harvard’s Clean Energy Project.
Backblaze’s CEO, Gleb Budman, gave a short, fluff-free presentation about the Pod and the thinking behind it in May 2011.
Smart Server Design
Google is a pioneer in designing Intel servers that use less power than conventional, off-the-shelf products. By working with the manufacturers, and buying in bulk, the company employs custom-designed motherboards, incorporating several unconventional ideas.
Here are some of the power-saving server ideas that Google’s disclosed:
Single rail voltage: Conventional Intel servers use a power supply unit (PSU) that provides three separate voltages: 3.3, 5, and 12 volts. This complexity introduces inefficiencies, which Google discovered it could reduce by using a PSU that generates only 12V. The other voltages required by the system are produced by solid-state regulators on the motherboard. It turns out that this is more efficient, and it also enables the next bright idea…
On-board UPS: Conventional data centers use a centralized uninterruptable power supply (UPS) architecture. The AC grid power enters the building at line voltage, and is converted to DC to charge a large number of batteries. When a power failure occurs, the UPS must switch over to generating line voltage AC from its stored battery energy, and do so in the blink of an eye. This is complex and inefficient, compared with Google’s solution, which is to mount a small backup battery on each server tray. This minimizes unnecessary power losses from AC/DC/AC conversion and is sufficient to power the server for around three minutes — enough time to cleanly shut down less critical servers and start up a local power generator.
A typical Google server tray, with 12V power supply on the left, and backup battery on the right (source: Google)
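A quick bit of sizing arithmetic shows why such a small per-tray battery suffices. The three-minute ride-through time is the figure quoted above; the tray wattage is an assumption made purely for illustration.

```python
# Rough sizing arithmetic for the per-tray battery. The three-minute ride-through
# time is the figure quoted above; the tray wattage is an assumption, and a real
# design would add margin for conversion losses and battery ageing.
tray_power_w = 250            # hypothetical per-tray draw
ride_through_min = 3

energy_wh = tray_power_w * ride_through_min / 60   # 12.5 Wh of stored energy
amp_hours_at_12v = energy_wh / 12                  # roughly a 1 Ah pack at 12 V
print(energy_wh, round(amp_hours_at_12v, 2))
```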
Slow fans: Google fits temperature sensors of the kind normally found in desktop PCs, but rarely in servers, which allow the server motherboard to spin the fans only as fast as the load requires. This not only saves power, but also makes the datacenter a less noisy environment for maintenance staff.
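The control logic involved can be as simple as a proportional ramp between two temperature set-points. The sketch below is illustrative only; the set-points and minimum duty cycle are invented, and Google hasn’t published its fan-control firmware.

```python
# Illustrative proportional fan curve (not Google's firmware): idle below a low
# set-point, full speed above a high set-point, and a linear ramp in between.
# The set-points and the minimum duty cycle are invented for the example.
def fan_duty(temp_c, idle_c=35.0, max_c=75.0, min_duty=0.2):
    """Return a PWM duty cycle between min_duty and 1.0 for a sensor reading."""
    if temp_c <= idle_c:
        return min_duty
    if temp_c >= max_c:
        return 1.0
    span = (temp_c - idle_c) / (max_c - idle_c)
    return min_duty + span * (1.0 - min_duty)

for t in (30, 50, 70, 80):
    print(t, round(fan_duty(t), 2))   # 0.2, 0.5, 0.9, 1.0
```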
Legacy-free: The custom motherboards include only the hardware that’s required. They do away with unnecessary traditional items, such as graphics chips, excessive USB connectors, parallel ports, etc.
Let HP Do It For You
Do all those ideas sound like pie in the sky? Perhaps they seem like great ideas, but you can’t see how you could implement them in your IT shop? Enter the HP EcoPOD — or HP POD 240a, to give it its proper title.
This new containerized data center product offers optimized free-air cooling (FAC), making it super-efficient — PUEs as low as 1.05, going up to just 1.3 when outside air temperatures are too high for FAC. It’s modular, comes pre-assembled from the factory, and is now available to what HP calls “select early-adopter clients.”
The HP EcoPOD containerized data center module (source: HP)
Lessons to Take Away
We can all learn lessons from these three examples. Here are just a few:
- Use the resources available to you (e.g., very cold water; underground pipes).
- Cheap, consumer-oriented hardware has many limitations, but it may be suitable if you’re talent-rich but cash-poor (e.g., SATA drives).
- Beware of doing things the same way as they’re usually done (e.g., centralized UPS; multi-rail PSUs).