
Flash Video is Better With AMD and ATI


Add to iTunes | Add to YouTube | Add to Google | RSS Feed

There are a lot of people making and watching videos online these days. When I visited the AMD campus in Texas during the SXSW conference recently, Casey showed me how AMD is helping people all over the world enhance their Flash experience. Adobe Flash 10.1 is a collaborative effort between Adobe and the team at AMD.

This new version of Flash takes advantage of the benefits enabled by AMD’s ATI Stream technology. Viewers get improved playback, reduced CPU usage, and longer battery life, thanks to the combination of all the resources found in both the CPU and GPU.

The GPU is a much more efficient video processor than the CPU. As we move toward a more mobile lifestyle, your battery will wear down faster if you process video on the CPU alone. AMD wanted to make this much more efficient by taking advantage of the different components in a computer or mobile system.

In addition to better battery life, the video footage will actually look better. With hardware acceleration enabled, videos are sharper and much clearer. You’ll even see more vibrant colors. Even the whites are whiter using this technology.

You can experience this for yourself right now. Make sure you have the updated video driver from AMD installed. Then, of course, you’ll need to download and install the Adobe Flash 10.1 beta on your system.

Thanks to the team at AMD for the help they provided to me to attend SXSW, and for all of the time they spent with me to discuss what’s new and exciting at AMD / ATI.

Want to embed this video on your own site, blog, or forum? Use this code or download the video:

AMD's Vision



During my trip to SXSW last week, I spent some time at the AMD campus. I was able to talk briefly with Raymond from the AMD Product Marketing Group. I had heard that AMD has a vision for Vision, and wanted to get more information for all of you. It’s a difficult process to buy a new computer these days, with all of the choices there are. AMD Vision was created to simplify that process for you.

When a consumer walks into a store, they typically have ideas as to what they want to buy. Vision, Vision Premium and Vision Ultimate designations are similar to “good”, “better” and “best” hardware configurations. What this will do is give the purchaser a much better idea of what they will get with any particular model, to help them decide if it is the right one for them.

Even without knowing a model number, people can compare models against each other by using the Vision designation. People don’t necessarily care about the tech terms, such as “gigabyte”. They want to be able to look at something, and just know it will work for them. With this program, it’s simple for an average computer user to figure out what is what, and what will do the job.

For instance, let’s say a woman walks into the store and needs to buy a notebook that will allow her to surf the web and watch movies. The Vision line would be perfect for that. However, if she also needed to do some light video processing, she would be better off choosing the Vision Premium model. If she’s a power user who does a lot of processor-intensive work (or even a gamer), she’ll need to go with a machine designated with the Vision Ultimate tag.

This is the vision of Vision – to simplify the buying process, and make it more intuitive. AMD has been seeing very positive results with this program. People understand what AMD is trying to convey, and are using it to their advantage.

From what I’ve seen, I have to agree. Vision is fantastic, and I highly recommend using this system when you are looking to buy your next computer.

I appreciate the folks at AMD sponsoring my trip to SXSW, and giving me the opportunity to get important product information to our community.


AMD Questions and Answers



Thanks to AMD for allowing our community to ask them questions. Yes, we have an open line of communication with them – so don’t think it begins and ends with this video. If it weren’t for AMD, I wouldn’t have been able to attend SXSW (and, of course, visit AMD’s campus in Austin, TX).

Does ATI have any plans to counter Nvidia’s Fermi? – Most people actually say that Fermi is Nvidia’s attempt to counter what ATI is doing. If you look, ATI has launched several DX11-based graphics cards over the past several months, whereas Nvidia has yet to launch a single Fermi card. The question should really be thrown at Nvidia, instead.

What is the best ATI video card out there? – If you’re looking for a single-GPU solution, it would be the HD 5870. If you’re looking for the best overall (and fastest) graphics card, you’ll want to check out the HD 5970.

What is your favorite technology in the new series of graphics cards? – It depends on the person, and how they use their graphics card. There are two technologies that make the ATI line the best there is. DirectX 11 is the most useful, as well as the most widely demanded. DirectX 11 was introduced with Windows 7, and is now available on Vista. It brings additional performance and quality capabilities. For power users, the best feature is Eyefinity. It’s a game changer.

What is the difference between the 4800 series cards and the 5800 series cards? – Both series are fantastic for DirectX 9 and DirectX 10 games. The 4800 series is still a good value, but the 5800 series gives you just a bit more performance.

If you were building an extreme gaming PC (no matter the cost), which GPU would you choose? – One of the employees answering questions recently did exactly that. He built his machine around the Phenom II X4 965 Black Edition quad-core processor, both for how well it pairs with Eyefinity and for the sheer power of the hardware. This processor incorporates 6MB of unified high-speed L3 cache and a high-speed DDR2/DDR3 memory controller.

Thanks again to everyone at AMD – not only for helping me with my SXSW trip, but for being so fantastic to work with during the event.


Are GPUs the future of CPUs?

Kerrick Long is a geek who submitted the following question to me this weekend:

I’ve been thinking recently about the advances that chip makers have been making with GPUs, and how the CPU market seems stagnant at the moment. ATI recently made the first GPU clocked at 1GHz available with the factory-overclocked Radeon HD 4890. In addition, that same card has 800 stream processors, while most CPUs have only two or four cores. Those cards also use fast GDDR5 memory.

On the software side, Snow Leopard is integrating OpenCL into the Mac OS, allowing developers to harness the power of the GPU in applications that traditionally might’ve just used the CPU. CUDA from NVIDIA also allows C programmers to harness some of the power of their GPUs with the right compilers, wrappers, and drivers.
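The idea Kerrick describes, splitting one job across hundreds of stream processors, is exactly what CUDA exposes to C programmers. Here’s a minimal sketch of a data-parallel kernel; vector addition is just an illustrative task, and it assumes an NVIDIA GPU with a CUDA toolkit recent enough to support managed memory:

```cuda
// Minimal CUDA sketch: add two vectors in parallel.
// Each GPU thread handles exactly one element -- the kind of
// data-parallel work a GPU's hundreds of stream processors excel at.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    size_t bytes = n * sizeof(float);

    // Managed memory keeps the sketch short: visible to CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);          // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU this loop would run a few iterations at a time across two or four cores; on a GPU, thousands of these lightweight threads run concurrently, which is why the parallelism Kerrick mentions is so attractive for this style of workload.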

All of this combined makes me wonder if someday soon we might see the death of the CPU as we know it today, to be replaced by a closer relative of the GPU. If GPU clock speed caught up with CPU clock speeds, the benefits of parallel processing available to the GPU would be immense! I know that changing architectures on an OS level would be quite a task (considering that Microsoft and Apple are just now making the switch from the decades-old x86 architecture), but do you think it will happen? Could the CPU be on its way out the door?