
Europe's fastest supercomputer uses warm water cooling to conserve energy and heat buildings

By Dario Borghino

June 26, 2012


Computer rendition of SuperMUC rendered by SuperMUC (Image: Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften)


An innovative cooling design for SuperMUC, Europe's most powerful supercomputer, will use warm water instead of air to keep tens of thousands of microprocessors at their optimal operating temperature and increase peak performance. The system, whose water cooling is said to remove heat some 4,000 times more efficiently than air, will also warm the Leibniz Supercomputing Centre campus that hosts it during the winter months, generating expected savings of up to US$1.25 million per year.

Cooling a data center is an expensive task: it can account for up to 50 percent of the center's total energy consumption. The challenge is compounded by the fact that SuperMUC is hosted in Germany, a country that, starting this year, requires all electricity consumed by state-funded institutions to come from sustainable energy sources.

The innovative cooling technology developed by IBM will help address this problem. Using a design inspired by the circulatory system, it transports water as warm as 45 degrees Celsius (113° F) directly to processors and memory components. The system is 10 times as compact and consumes 40 percent less energy than a comparable air-cooled system.
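The efficiency claim is broadly consistent with a back-of-envelope comparison of how much heat water and air can carry per unit volume. The property values in this sketch are standard textbook numbers, not figures from IBM's materials:

```python
# Back-of-envelope check on the water-vs-air cooling claim.
# Volumetric heat capacity = density * specific heat capacity.
water_rho = 1000.0   # kg/m^3 (assumed textbook value)
water_cp = 4186.0    # J/(kg*K)
air_rho = 1.2        # kg/m^3, at roughly room temperature
air_cp = 1005.0      # J/(kg*K)

water_vhc = water_rho * water_cp  # J/(m^3*K)
air_vhc = air_rho * air_cp        # J/(m^3*K)
ratio = water_vhc / air_vhc

print(f"Water holds about {ratio:,.0f} times more heat per unit volume than air")
```

The ratio comes out around 3,500, the same order of magnitude as the quoted "4,000 times" figure.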

According to the recently revised TOP500 list, SuperMUC's 18,000 energy-efficient Intel Xeon processors make it Europe's fastest supercomputer, clocking in at 3 petaflops (3 million billion floating-point operations per second). That's a long way behind the new number one on the list, the IBM Sequoia, which is seven times as fast, but with performance comparable to that of 100,000 personal computers combined, SuperMUC isn't exactly slow, either.
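The comparisons above can be sanity-checked with simple arithmetic that only reuses the article's own round numbers:

```python
# Illustrative arithmetic reproducing the article's round figures.
supermuc_flops = 3e15                    # 3 petaflops = 3 million billion FLOPS
sequoia_flops = 7 * supermuc_flops       # "seven times as fast"
per_pc_flops = supermuc_flops / 100_000  # "100,000 personal computers"

print(f"Sequoia (implied): {sequoia_flops / 1e15:.0f} petaflops")
print(f"Per-PC performance (implied): {per_pc_flops / 1e9:.0f} gigaflops")
```

The implied 30 gigaflops per machine is a plausible order of magnitude for a 2012-era desktop, so the "100,000 PCs" comparison roughly holds up.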

This impressive number-crunching capacity will be used to aid a number of research projects across Europe, ranging from simulating the blood flow generated by an artificial heart valve to improving our understanding of earthquakes. The SuperMUC system is also connected to powerful visualization systems, including a large stereoscopic power wall and a five-sided immersive artificial virtual-reality environment for visualizing three-dimensional data sets.

The engineering team behind the project is targeting an aggressive reduction in size, saying it can shrink the system's volume tenfold every five years until, 30 years from now, the entire processing power of the data center is contained in a form factor the size of a standard desktop computer, at much higher energy efficiency than today's.

The project is jointly funded by the German federal government and the state of Bavaria. SuperMUC will be officially inaugurated in July 2012 at the Leibniz Supercomputing Centre in Garching, Germany. The video below explains more about the cooling mechanism.

Source: IBM

About the Author
Dario Borghino studied software engineering at the Polytechnic University of Turin. When he isn't writing for Gizmag he is usually traveling the world on a whim, working on an AI-guided automated trading system, or chasing his dream to become the next European thumb-wrestling champion.
9 Comments

Have they considered the use of a fluid other than water? My first thought was, I hope it doesn't spring a leak! Water is the cheapest liquid, obviously, but maybe a refrigerant could work more efficiently, and would not short-circuit the electronics in the event of a leak. I am intrigued by the need to use warm water; I thought that circuits work better when they're cold. Interesting that the heat exchange is totally localised. Is it really the best way to remove excess heat?

windykites1
27th June, 2012 @ 05:22 am PDT

Do you know what is an even better coolant than warm water? Non-warm water.

Snake Oil Baron
27th June, 2012 @ 11:54 am PDT

re: windykites1

Deionized water does not conduct electricity, evaporates without leaving a residue, and carries a large amount of heat per volume.

The chips are cooled in parallel to give all the chips equal cooling, and the central heat exchanger used to dump the heat into the building's environmental system gives an even water temperature to the chips and works well with the air handlers.

Slowburn
27th June, 2012 @ 11:57 am PDT

I followed the link to a similar Gizmag story and the idea seems to be that warm water saves money because chilling it is expensive. But water usually does not come out of the ground or reservoirs at 60 degrees C so why warm it? If they mean that the water is going to warm up that much in the machines after a few passes so it will be continuously warm it would still be a simple thing to run it through some coiled pipes to cool it before recycling it. I just don't get the reason for using *warm* water.

Snake Oil Baron
27th June, 2012 @ 12:03 pm PDT

Rising energy costs, along with an increased focus on reducing the environmental impact of large-scale datacenter deployments, are allowing for some pretty interesting innovation on this front.

NextDC's M1 facility in Melbourne is showing similar ingenuity by having triple power sources (Utility A/C, Natural Gas Gen, Diesel Gen) configured in a N+1 scenario. This will ensure costs are controllable despite carbon taxes, etc.

http://nextdc.com/blog/132-m1-carbon-tax-and-our-tri-gen-plants-for-melbourne.html

Necessity is the mother of invention and there's no necessity like keeping an eye on your hip pocket.

Dave Hill
27th June, 2012 @ 01:38 pm PDT

The idea here is economy. It does not pay to spend millions on chilling when the performance gain for that setup would be nothing like that of a home system. This type of system needs far less cooling to maintain the "circulatory system" at the needed temperature, and they gain further savings in cold weather, when they can use the waste heat to heat other buildings. It is a balance: sure, they could chill the system, as I said, for millions, for a minimal performance gain, and still use the waste heat for heating other buildings, but the cost of chilling would come nowhere near being offset by the money saved heating those buildings.

FredExII
27th June, 2012 @ 05:03 pm PDT

It has been my experience that refrigerating the computer case does not make the chips faster; it just creates a larger temperature differential that speeds the transfer of heat out of the chip, allowing the chips to be overclocked without overheating. Water, being a much better coolant than air, does not need as large a temperature differential to provide the same level of heat removal, so warm water can provide the same amount of cooling as chilled air.

Slowburn
28th June, 2012 @ 03:22 am PDT
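The point about temperature differentials can be sketched with Newton's law of cooling, Q = h·A·ΔT: for the same heat load Q and exchange area A, the required differential ΔT shrinks as the convective coefficient h grows. The coefficients and geometry below are rough, assumed, order-of-magnitude values, not anything from the SuperMUC design:

```python
# Newton's law of cooling: Q = h * A * dT  =>  dT = Q / (h * A)
# All values are assumed, order-of-magnitude illustrations.
h_air = 50.0      # W/(m^2*K), typical forced-air convection
h_water = 5000.0  # W/(m^2*K), typical forced-water convection

Q = 100.0  # W dissipated by one chip (assumed)
A = 0.05   # m^2 effective heat-exchange area, fins included (assumed)

dT_air = Q / (h_air * A)      # required rise above coolant temperature, in K
dT_water = Q / (h_water * A)

print(f"Air needs about {dT_air:.0f} K of differential; water about {dT_water:.1f} K")
```

With a hundred-fold better transfer coefficient, 45° C water can hold a chip near the same temperature that much colder air would.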

Maybe they should have made more use of the term "optimal"... Even running the chips at 100 degrees C, as long as they are maintained at a constant temperature, all of the excess energy is being removed... therefore there is no risk of overheating... (depending on the max design temperature).

(IBM states this is "the world's first commercially available hot-water cooled supercomputer", so the qualifiers are "hot water" and "supercomputer"; there are water-cooled server farms already.)

Not a lot new... (IBM has been using water cooling for several years... customers just need to catch on.)

Quote from 2007...."Water cooling is both more efficient than air cooling and can handle higher heat loads, simply because water is far more conductive of heat and has much higher thermal mass than air. It's been slow to catch on because administrators are paranoid about leaks (water and electronics certainly don't mix well), but systems are available now that have been proven reliable. IBM and HP have water-cooled server racks, and Knurr's even won a design award." http://www.ecogeek.org/content/view/1140/71/ )

They didn't say that the water is heated using external energy... after a few passes through the system the water will be hot. The hot side can be controlled thermostatically (as in car radiators, only using liquid-to-liquid exchangers with a large thermal pool for sinking the heat). Using the waste heat to heat buildings is another logical step (I hope no new patents have been granted for that... even generating electricity from low-grade heat sources can be improved upon...).

People who know nothing about thermodynamics will just go "Wow"...

It's all about energy removal....

Even the turbulence of the coolant in the pipes affects the transfer rate...

Pretty much, the energy transport will usually be the "slowest part of the system", not the conductance through the walls of the heat-sinking channels... Higher turbulence increases the convection that removes the heat.... Low turbulence gives a slower transfer rate, but longer pipe/channel life.

The probable take-home is that hot water controls the temperature of the chips better than expensive cold air...

MD
29th June, 2012 @ 09:27 pm PDT

Notice the mention of massive further increases in efficiency as they shrink the whole design. As the channels (capillaries?) get smaller, the surface area increases, speeding heat transfer rates.

Brian H
4th July, 2012 @ 07:07 pm PDT
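The observation about shrinking channels is generic geometry rather than anything specific to IBM's design: for a cylindrical coolant channel, the wall area available for heat exchange per unit of coolant volume scales as 2/r, so narrower channels exchange heat faster for the same coolant volume. A quick sketch:

```python
# Surface-area-to-volume ratio of a cylindrical channel of radius r:
#   lateral area / volume = (2*pi*r*L) / (pi*r^2*L) = 2 / r
def wall_area_per_volume(radius_m: float) -> float:
    """Channel wall area (m^2) per cubic meter of coolant."""
    return 2.0 / radius_m

for r_mm in (4.0, 2.0, 1.0):
    ratio = wall_area_per_volume(r_mm / 1000.0)
    print(f"r = {r_mm:.0f} mm -> {ratio:.0f} m^2 of wall per m^3 of coolant")
```

Halving the channel radius doubles the exchange area per unit of coolant, which is one reason shrinking the design keeps improving its efficiency.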