Refrigerators for IT servers? Cool
Naissus applies its own version of thermal management to take the toxic heat out of data centres.
Moore’s Law holds that the number of transistors on a microchip doubles roughly every two years – an exponential improvement in computing capacity that has dramatically enhanced digital electronics and the world economy. But it comes with mounting challenges.
Maintaining data centres – where servers stacked with the latest super-fast processors from chipmakers such as Intel Corp. and Advanced Micro Devices Inc. (AMD) act as the brain hubs of organizations as diverse as military installations, financial institutions and public utilities – is a mission-critical operation. If these systems fail, the consequences can be disastrous.
Keeping up with IT evolution is one thing – over 10 years, a company’s server system will typically need to be refreshed three times – but there is another issue: how to handle the heat.
“Heat in a data centre is toxic to servers,” says Peter Jeffery, president and CEO of Toronto-based Naissus Thermal Management Solutions Inc. “The thermal management systems in data centres haven’t come anywhere close to handling the kind of capacity curve increase we’ve seen in the past 10 years.”
For most small to large enterprises, data processing systems (clusters of data servers and storage devices) are kept in cool rooms, sometimes in buildings of more than one million square feet, and cooled by forced air from computer room air conditioning (CRAC) units. Real estate is expensive, and as the power of the servers expanded, so too did the need for larger rooms – not to house the servers themselves, which were becoming increasingly compact, but to accommodate the larger CRAC units needed to remove the additional heat generated by the more powerful machines.
“When you go to the grocery store and buy a quart of milk, you don’t put that milk on the counter and flood the room with cold air to keep that milk cold,” says Jeffery. “You put it inside a refrigerator. It’s the same kind of concept. We build refrigerators for servers.”
Jeffery and his partner Mirko Stevanovic became aware of the air-cooled solution’s limits while working for an international electronics manufacturing services provider in Toronto more than 10 years ago. Meanwhile, in Santa Clara, Calif., a former University of Alberta engineering professor was working as chief technology officer for Exodus Communications building data centres and rising to guru status for his efforts.
In three-and-a-half years, Paul DeGroot built 27 data centres worth an estimated US$1.2 billion, occupying 2.5 million square feet of real estate. He was among the first to offer co-location data centre services, and his initial clients sharing data centre space in 1995 were Yahoo, Microsoft Corp.’s Hotmail and eBay.
DeGroot says between 1997 and 2000 Hotmail was registering 90,000 new clicks per day – new customers looking for information. “Hotmail was growing so fast and began to use so much power that my data centres couldn’t handle the load anymore.”
Less space for more thermal load
At the same time, Jeffery was responsible for overseeing the installation of thermal management components in client data centres. “There was a huge dot-com data centre boom underway back then,” he says. “Everyone was building them.”
Indeed, in the space of a decade, net density went from roughly 1.5 kilowatts (kW) per cabinet to 30 kW. “And that exponential increase in density really drove changes in the data centre infrastructure,” Jeffery says.
For one thing, the thermal load created by servers with five to six processors exceeded 150 to 200 watts per square foot, so cooling the area around the servers with CRAC units required ever more space.
“In the old days, and it’s still pretty prevalent today, these CRAC units were very large systems that could be anywhere from three feet wide and nine feet long to six or seven feet high. They blow cold air under a false or plenum floor. [The air] exits through perforated floor tiles and grates and is put up in front of servers,” says Jeffery.
“Once we got beyond a certain level of density, that type of technology wasn’t adequate to efficiently cool higher density loads.”
Because the company Jeffery was working with at the time was building thermal management systems for servers, it tended to be ahead of the marketplace. “We knew this huge exponential increase in thermal density was going to set the data centre market on its ear.”
He brainstormed with business partner Mirko Stevanovic, who earned his master’s degree in mechanical engineering at the University of Waterloo specializing in heat transfer in micro-electronics. “We said, ‘If you were to do this all over again, what would you do?’ Our take on it was that you wouldn’t necessarily cool the entire room. You would cool the area just immediately around the servers to start with. Then you would use water as a thermal transport medium because it’s 3,631 times more efficient [than air]. It has more capacity to remove heat.”
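The “3,631 times” figure refers to water’s volumetric heat capacity relative to air’s; the exact ratio depends on the temperature and pressure assumed, but a back-of-the-envelope calculation using approximate room-temperature property values lands in the same range:

```python
# Rough comparison of water vs. air as a heat-transport medium,
# using approximate property values at ~20 C and 1 atm.

rho_water = 998.0   # density of water, kg/m^3
cp_water = 4184.0   # specific heat of water, J/(kg*K)

rho_air = 1.204     # density of air, kg/m^3
cp_air = 1006.0     # specific heat of air, J/(kg*K)

# Volumetric heat capacity: energy absorbed per cubic metre per kelvin
vhc_water = rho_water * cp_water   # ~4.2 MJ/(m^3*K)
vhc_air = rho_air * cp_air         # ~1.2 kJ/(m^3*K)

ratio = vhc_water / vhc_air
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

At these assumed conditions the ratio works out to roughly 3,400:1; the slightly higher figure Jeffery cites presumably reflects different reference conditions, but the order of magnitude is the same.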
While pulling out his hair in 2006 trying to build the infrastructure to cool a data centre with increasing thermal density in Atlanta, DeGroot was spending a fortune doing “crazy” things to install an air-cooled solution.
“Unbeknownst to me, Jeffery and his partner Stevanovic had been trying to get an appointment with me through my secretary. God knows how they got my attention but they did. They showed me the first water-cooled cabinet, originally developed for IBM [Corp.]. And when I saw that presentation, a light went on and I said, ‘Voila! That’s it!’”
Naissus puts a radiator, or heat exchanger, right in the base of the cabinet to pull the hot air off the backside of the servers and down into a duct built into the rear door of the enclosure. As the hot air is blown across the heat exchanger coil, its heat is transferred to a chilled-water supply system. “The cool air is put up in front of the server so that we create a continuous closed loop inside of the enclosure,” says Jeffery. “The ambient environment is not used to cool the servers. We are basically creating a small micro-environment for the servers inside of the enclosure.”
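To put the closed-loop approach in perspective, a rough sizing calculation shows how little water is needed to carry away a full cabinet’s heat. The 30 kW load and 10°C temperature rise below are illustrative assumptions, not Naissus specifications:

```python
# Rough chilled-water flow needed to remove one cabinet's heat load.
# The 30 kW load and 10 K temperature rise are illustrative
# assumptions, not Naissus specifications.

cabinet_load_w = 30_000.0   # heat load of a high-density cabinet, W
delta_t = 10.0              # water temperature rise across the coil, K
cp_water = 4184.0           # specific heat of water, J/(kg*K)

# Energy balance across the coil: Q = m_dot * cp * delta_T
m_dot = cabinet_load_w / (cp_water * delta_t)   # mass flow, kg/s
litres_per_min = m_dot * 60.0                   # 1 kg of water ~ 1 L

print(f"Flow required: {m_dot:.2f} kg/s (~{litres_per_min:.0f} L/min)")
```

Under those assumptions, a flow of well under one litre per second absorbs a load that would take thousands of times more air by volume to move, which is why the loop fits inside the cabinet door.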
The idea was developed further and work continued on the water-cooled system leading to the creation of Naissus with Jeffery as principal and president and Stevanovic as principal and chief thermal architect. DeGroot joined them in 2009 as principal and chief application engineer. Since 2010 they have installed 258 systems around the world, including some mission critical military applications.
Naissus partners with local manufacturers to serve their customers’ needs, allowing the principals to focus on the core business. Nelson Industrial Inc., based in Pickering, Ont., builds components and subsystems to Naissus’ specifications, while Great Lakes Case and Cabinet Co. Inc. from Erie, Pa. makes standard enclosure systems that Naissus customizes in Toronto.
The market potential is enormous. DeGroot says there are hundreds of existing legacy data centres that are now obsolete. To upgrade to current blade server technology, these firms will need a water-cooling solution. “It can’t be done with air-cooling technology,” DeGroot stresses. “It’s just not financially feasible.”
The Naissus system has an attractive feature: it can be specified for LEED-certified buildings thanks to a power usage effectiveness (PUE) rating as low as 1.25. “We can build a 20 megawatt data centre in about 45% to 52% less space,” says DeGroot, who notes some customers can realize a return on investment within two years.
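PUE is simply total facility power divided by the power delivered to the IT equipment, so a lower number means less overhead spent on cooling and distribution. A quick comparison shows what a 1.25 rating means for a 20 MW facility; the 2.0 “legacy” figure below is an assumed typical value for older air-cooled data centres of the era, not a number from Naissus:

```python
# Power usage effectiveness: total facility power / IT equipment power.
# The 2.0 legacy PUE is an assumed typical value for older air-cooled
# facilities, not a figure from the article.

it_load_mw = 20.0   # IT load of a hypothetical 20 MW data centre

def facility_power(it_mw: float, pue: float) -> float:
    """Total power drawn by a facility running it_mw of IT load at a given PUE."""
    return it_mw * pue

legacy_mw = facility_power(it_load_mw, 2.0)    # 40 MW total draw
naissus_mw = facility_power(it_load_mw, 1.25)  # 25 MW total draw

print(f"Overhead avoided: {legacy_mw - naissus_mw:.0f} MW")
```

Under those assumptions, the lower PUE avoids 15 MW of continuous overhead on the same IT load, which is where the two-year payback cited by DeGroot comes from.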
The energy savings potential should be of interest to corporations that have invested heavily in data centre infrastructure. According to Ken Brill, executive director of The Uptime Institute, which monitors data centre performance, the top third of data centres increased power consumption by 20% to 30% between 2004 and 2006.
“The cost of energy has seldom been a concern for IT departments in the past and there was little incentive to invest in energy efficiency improvements. But as data centre energy costs become more visible, the financial benefits of moving to a greener mode of operation are being recognized by CEOs, CFOs and CIOs,” analyst Eric Woods wrote for Pike Research in August 2010.
Brill says most data centres more than five years old are obsolete. “As a result we are on the verge of the biggest data centre construction boom in history.”
Naissus is poised to take the heat out of that boom.
Kim Laudrum is a Toronto-based business writer who specializes in manufacturing issues. E-mail email@example.com.
This article appears in the March 2012 edition of PLANT.