It’s easy to think of the Internet as something that’s just “out there” in cyberspace, with no tangible effect on the physical world. In 2009, however, it was estimated that Internet data centers worldwide consumed about 2% of global electricity production. Not only did most of that electricity undoubtedly come from non-green sources, but it also cost the global economy approximately 30 billion US dollars. Much of that electricity was needed to power the data centers’ forced-air cooling systems, which keep the servers from overheating. Now, researchers from IBM Zurich and the Swiss Federal Institute of Technology Zurich (ETH) have devised a much more efficient method for cooling the steamy Internet - they use hot water.
Why water?
Liquid cooling is by nature a much more effective cooling method, as the heat capacity of water is over 4,000 times that of air. Also, once the heat is transferred to the water, it can be handled more efficiently. In IBM/ETH’s model, the server-heated water could even go on to provide heat for the local community.
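As a rough illustration of that heat-capacity gap, the sketch below compares how much heat a litre of water can soak up versus a litre of air for the same temperature rise. The property values are typical textbook figures assumed for this example, not numbers from the IBM/ETH work, and the exact multiple depends on the air’s temperature and pressure.

```python
# Back-of-the-envelope comparison of water vs. air as a coolant.
# Property values are typical room-temperature figures (assumed,
# not taken from the IBM/ETH study).

WATER_DENSITY = 1000.0  # kg/m^3
WATER_CP = 4186.0       # J/(kg*K), specific heat of water
AIR_DENSITY = 1.2       # kg/m^3
AIR_CP = 1005.0         # J/(kg*K), specific heat of air

def heat_per_litre(density, cp, delta_t=1.0):
    """Heat (J) absorbed by one litre of fluid warming by delta_t kelvin."""
    volume_m3 = 0.001  # one litre
    return density * volume_m3 * cp * delta_t

water_j = heat_per_litre(WATER_DENSITY, WATER_CP)
air_j = heat_per_litre(AIR_DENSITY, AIR_CP)

# Per unit volume, water carries off thousands of times more heat than air.
print(f"water: {water_j:.0f} J per litre per K, "
      f"air: {air_j:.2f} J per litre per K, "
      f"ratio ~ {water_j / air_j:.0f}x")
```

The ratio that falls out of these assumed values is in the low thousands, which is why moving the same server heat takes far less pumped volume with water than with blown air.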
But why HOT water?
Chilled water has been used to cool mainframes, and it certainly does the job, but there’s a catch - chilling that water requires a lot of electricity. The Swiss process uses water that’s at 60-70C (140-158F), which is still cool enough to keep the servers’ chips below their “red line” of 85C.
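A quick energy-balance sanity check shows why 60-70C inlet water still leaves headroom. The chip power and per-chip flow rate below are illustrative guesses; only the inlet range and the 85C red line come from the article.

```python
# Estimate the coolant outlet temperature for a single server chip.
# Chip power and flow rate are illustrative assumptions; the 60-70 C
# inlet band and the 85 C red line are the figures from the text.

WATER_CP = 4186.0          # J/(kg*K), specific heat of water
CHIP_POWER_W = 200.0       # assumed heat load of one chip
FLOW_KG_PER_S = 0.5 / 60   # assumed 0.5 L/min of water (~0.0083 kg/s)

INLET_C = 70.0             # hot end of the 60-70 C inlet range
RED_LINE_C = 85.0          # chips must stay below this

# Energy balance: P = m_dot * cp * dT  =>  dT = P / (m_dot * cp)
delta_t = CHIP_POWER_W / (FLOW_KG_PER_S * WATER_CP)
outlet_c = INLET_C + delta_t

print(f"water warms by {delta_t:.1f} K -> outlet ~{outlet_c:.1f} C "
      f"(red line {RED_LINE_C} C)")
```

Even starting at the hot end of the inlet range, the water under these assumptions warms by only a few kelvin, staying well under the chips’ red line - while leaving at a temperature useful for district heating.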
How it works
Computers and many other electrical devices dissipate heat using something called a heat sink. Heat sinks look like a row of closely spaced upright rectangular metal blades, and they work by dramatically increasing the device’s surface area - much as an elephant uses its giant ears to increase its own cooling surface area. IBM/ETH’s process uses what they call a microfluidic heat sink. It contains a network of tiny channels through which the water is pumped, absorbing heat from the metal along the way.
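To see why wetted area and the choice of fluid both matter, here is a toy comparison based on Newton’s law of cooling, Q = h * A * dT. Every number - chip size, convection coefficients, area multiplier - is an illustrative order-of-magnitude assumption, not a measurement from the IBM/ETH design.

```python
# Toy comparison using Newton's law of cooling, Q = h * A * dT.
# All figures are illustrative assumptions for a ~2 cm x 2 cm chip.

CHIP_AREA_M2 = 0.02 * 0.02   # bare chip footprint, 4 cm^2
DELTA_T_K = 15.0             # assumed chip-to-coolant temperature difference

# Order-of-magnitude convection coefficients, W/(m^2*K) (assumed):
H_FORCED_AIR = 100.0         # forced air over a metal surface
H_MICROCHANNEL = 10000.0     # water flowing through microchannels

FIN_AREA_MULT = 10.0         # fins/channels multiply wetted area (assumed)

def heat_removed(h, area, dt):
    """Heat transfer rate (W) via Newton's law of cooling, Q = h * A * dT."""
    return h * area * dt

wetted_area = CHIP_AREA_M2 * FIN_AREA_MULT
air_w = heat_removed(H_FORCED_AIR, wetted_area, DELTA_T_K)
water_w = heat_removed(H_MICROCHANNEL, wetted_area, DELTA_T_K)

print(f"forced air: ~{air_w:.0f} W, microchannel water: ~{water_w:.0f} W")
```

With the same enlarged surface area, the water-filled microchannels move on the order of a hundred times more heat in this toy model, which is the intuition behind packing the coolant passages directly into the heat sink.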