
What do you get when you cross a CPU with a GPU? AMD’s Fusion family of APUs

By Darren Quick

June 3, 2010


The AMD Fusion ushers in what AMD says is a significant shift in processor architecture and capabilities.


At Computex 2010 AMD gave the first public demonstration of its Fusion processor, which combines the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) on a single chip. The AMD Fusion family of Accelerated Processing Units (APUs) not only adds another acronym to the computer lexicon, but ushers in what AMD says is a significant shift in processor architecture and capabilities.

AMD says combining the CPU, GPU, video processing and other accelerator capabilities in a single-die design yields more power-efficient processors that are better able to handle demanding operations such as HD video, media-rich Internet content and DirectX 11 games. AMD hasn't revealed the technical specs of the GPUs it will embed in its APUs, but has disclosed that they will be DirectX 11 compliant.

Many of the improvements stem from eliminating the chip-to-chip linkage that adds latency to memory operations and consumes power: moving electrons across a chip takes less energy than moving those same electrons between two chips. Co-locating all the key elements on one chip also allows a holistic approach to power management of the APU, with various parts of the chip powered up or down depending on the workload.

“Hundreds of millions of us now create, interact with, and share intensely visual digital content,” said Rick Bergman, senior vice president and general manager, AMD Product Group. “This explosion in multimedia requires new applications and new ways to manage and manipulate data. Low resolution video needs to be up-scaled for larger screens, HD video must be shrunk for smart phones, and home movies need to be stabilized and cleaned up for more enjoyable viewing. When AMD formally launches the AMD Fusion family of APUs, scheduled for the first half of 2011, we expect the PC experience to evolve dramatically.”

The demonstration at Computex was the first public display of a chip AMD has been working on for several years. In it, AMD emphasized that the new Fusion APUs are designed to simplify the task consumers face in choosing a PC that is right for their needs.

PC and PC component manufacturers have made this promise before but most consumers are still bamboozled by the acronyms and range of specifications they are forced to wade through when purchasing a new computer. So we’ll have to reserve judgment until we see if AMD can deliver on this front.

Computex 2010 also saw AMD unveil its “AMD Fusion Fund,” a program designed to make strategic investments in companies developing new, enhanced digital experiences that take advantage of the forthcoming AMD Fusion family of APUs.

One company looking to take advantage of the Fusion APUs, though it probably doesn't need any handouts from the Fusion Fund, is Microsoft, whose corporate vice president of the original equipment manufacturer division, Steven Guggenheimer, joined AMD on stage at Computex.

“While visual computing has made incredible strides in recent years, we believe that the AMD Fusion family of APUs combined with Windows 7 and DirectX 11 will fundamentally change how applications are developed and used,” said Guggenheimer. “Applications such as Internet browsing, watching HD video, PowerPoint and more can enable more immersive, visually rich, and intuitive experiences for consumers worldwide.”

With their claims of improved performance and power efficiency, the most obvious target for AMD's new APUs is ultraportables, and that's where they'll most likely show up first when AMD officially launches its Fusion family of APUs in the first half of 2011.

About the Author
Darren Quick's love of technology started in primary school with a Nintendo Game & Watch Donkey Kong (still functioning) and a Commodore VIC 20 computer (not still functioning). In high school he upgraded to a 286 PC, and he's been following Moore's law ever since. This love of technology continued through a number of university courses and crappy jobs until 2008, when his interests found a home at Gizmag.
5 Comments

There are two problems with this. Firstly, heat management will have to be worked on big time, with the two hottest components pushed so close together; and for those who like to upgrade, it will be far more expensive and less customisable.

David Anderton
3rd June, 2010 @ 12:44 am PDT

David: Problems, as in, things they haven't solved yet? Not necessarily so. It will be interesting however, to see what compatibility, if any, exists between CPU socket types. I'd have to imagine that this will be an entirely new beast requiring a new motherboard that likely won't be backward compatible with existing CPU(s). Just some guesswork on my part.

Joe Khoobyar
3rd June, 2010 @ 09:04 am PDT

Let's hope they develop some Linux drivers for this.

Facebook User
4th June, 2010 @ 05:58 am PDT

The bandwidth between components will be higher and latency lower. I am going to guess that they will share a memory bus as well, allocating it to where it is needed. This was going to happen eventually, and when AMD bought ATI it was going to happen sooner rather than later.

Facebook User
4th June, 2010 @ 07:17 am PDT

OK, I have to comment about the cooling. The chip die is going to be the same size no matter what is on it. You can only fit so many transistors on a chip. Currently they have been pushing the number of cores; AMD has a 12-core processor, and could probably up it further. In a combined CPU-GPU system, some of those cores (my guess would be 6-8) would be removed to make room for the GPU. So cooling would be no different than it is now.

Yes, you have to be smart about how you cool it, but it's not like adding a GPU is going to make it hotter. Basically, the reason we've gone to multi-core chips at all is that the chip manufacturers had WAY too many transistors free, so they started replicating the cores and adding some glue logic so that all the transistors would be used. It's just a factor of Moore's law: double the transistor count every 18 months. Industry has been paying gobs of money to stay on this path.

This is an inevitability, due to the ever-increasing number of transistors on a chip. Eventually most of your computer will be on a single chip; it removes latency between parts, and the only issue is how to effectively get the signals in and out, because even as you add more stuff to a chip, you don't add any more pins, and you still have to be able to access everything on the chip.

Chip designers were discussing entire systems on a chip over 10 years ago, with the knowledge that it would eventually be possible. If Moore's law continues, you'll see RAM, and maybe eventually SSD memory, included on the single chip, leaving you with little more than peripherals and optical drives left in your system.

Richard Underwood
4th June, 2010 @ 09:45 am PDT