Is it possible to run CUDA on AMD GPUs?

Tags: cuda, gpu, gpgpu, nvidia, amd

Cuda Problem Overview


I'd like to extend my skill set into GPU computing. I am familiar with raytracing and real-time graphics (OpenGL), but the next generation of graphics and high-performance computing seems to be in GPU computing or something like it.

I currently use an AMD HD 7870 graphics card on my home computer. Could I write CUDA code for this? (my intuition is no, but since Nvidia released the compiler binaries I might be wrong).

A second, more general question: where do I start with GPU computing? I'm certain this is an often-asked question, but the best answer I saw was from '08, and I figure the field has changed quite a bit since then.

Cuda Solutions


Solution 1 - Cuda

Nope, you can't use CUDA for that. CUDA is limited to NVIDIA hardware. OpenCL would be the best alternative.

Khronos itself has a list of resources, as does the StreamComputing.eu website. For AMD-specific resources, you might want to have a look at AMD's APP SDK page.

Note that at this time there are several initiatives to translate/cross-compile CUDA to different languages and APIs. One such example is HIP. Note, however, that this still does not mean that CUDA runs on AMD GPUs.
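To give a flavor of what such a translation looks like, here is an illustrative sketch (the kernel and host calls are hypothetical examples, not taken from any one project): the compute kernel itself is typically left unchanged, and only the host-side runtime API and launch syntax are rewritten.

```cuda
// Original CUDA: a saxpy kernel and its triple-angle-bracket launch.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
// Host side (CUDA):
//   cudaMalloc(&d_x, n * sizeof(float));
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
//
// After translation to HIP, the kernel body is identical; only the
// runtime calls change (cudaMalloc -> hipMalloc) and the launch becomes
//   hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
//                      n, 2.0f, d_x, d_y);
// which can then be compiled for either AMD or NVIDIA back ends.
```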

Solution 2 - Cuda

You can run NVIDIA® CUDA™ code on Mac, and indeed on OpenCL 1.2 GPUs in general, using Coriander. Disclosure: I'm the author. Example usage:

cocl cuda_sample.cu
./cuda_sample
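(For context, cuda_sample.cu above stands in for any small CUDA source file; a minimal hypothetical one might look like the following. The filename and kernel are illustrative only, not taken from Coriander's repository.)

```cuda
#include <cstdio>

// Trivial kernel: add 1.0 to each element of a device buffer.
__global__ void add_one(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 256;
    float *d;                                 // device buffer
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));      // zero-initialize
    add_one<<<1, n>>>(d, n);                  // one block of n threads
    float host[n];
    cudaMemcpy(host, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);        // expect 1.000000
    cudaFree(d);
    return 0;
}
```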


Solution 3 - Cuda

Yup. :) You can use Hipify to convert CUDA code quite easily to HIP code, which can be compiled and run on both AMD and NVIDIA hardware. Here are some links:

GPUOpen: a very cool site by AMD with tons of tools and software libraries to help with different aspects of GPU computing, many of which work on both platforms.

The HIP GitHub repository, which shows the process to hipify CUDA code.

HIP GPUOpen Blog

Update 2021: AMD changed the website link; go to the ROCm website:

https://rocmdocs.amd.com/en/latest/

Solution 4 - Cuda

You can't use CUDA for GPU programming here, as CUDA is supported by NVIDIA devices only. If you want to learn GPU computing, I would suggest starting CUDA and OpenCL simultaneously; that would be very beneficial for you. As for CUDA, you can use MCUDA, which doesn't require an NVIDIA GPU.

Solution 5 - Cuda

I think it is going to be possible soon on AMD FirePro GPUs; see the press release here. Support for the developer tools is coming in Q1 2016:

> An early access program for the "Boltzmann Initiative" tools is planned for Q1 2016.

Solution 6 - Cuda

These are some basic details I could find.

> Linux

ROCm supports the major ML frameworks like TensorFlow and PyTorch, with ongoing development to enhance and optimize workload acceleration.

It seems the support is only for Linux systems (https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

ROCm is based on HIP:

Heterogeneous-Computing Interface for Portability (HIP) is a C++ dialect designed to ease conversion of CUDA applications to portable C++ code. It provides a C-style API and a C++ kernel language. The C++ interface can use templates and classes across the host/kernel boundary. The HIPify tool automates much of the conversion work by performing a source-to-source transformation from CUDA to HIP. HIP code can run on AMD hardware (through the HCC compiler) or NVIDIA hardware (through the NVCC compiler) with no performance loss compared with the original CUDA code.
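As a concrete sketch of the C-style API described above (API names as documented for HIP; this is an illustrative example, not verified against a specific ROCm version):

```cuda
#include <hip/hip_runtime.h>

// Kernel syntax is identical to CUDA's.
__global__ void scale(float *v, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}

int main() {
    const int n = 1024;
    float *d;
    hipMalloc(&d, n * sizeof(float));      // C-style allocation, mirrors cudaMalloc
    hipMemset(d, 0, n * sizeof(float));
    // HIP's portable launch macro replaces CUDA's <<<...>>> syntax:
    hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0, d, 2.0f, n);
    hipDeviceSynchronize();
    hipFree(d);
    return 0;
}
```

The same source compiles with hipcc for AMD targets or, through HIP's NVCC path, for NVIDIA ones.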

The TensorFlow ROCm port is https://github.com/ROCmSoftwarePlatform/tensorflow-upstream and its Docker container is https://hub.docker.com/r/rocm/tensorflow

> Mac

This is supported for macOS 12.0+ (as per their claim):

Testing conducted by Apple in October and November 2020 using a production 3.2GHz 16-core Intel Xeon W-based Mac Pro system with 32GB of RAM, AMD Radeon Pro Vega II Duo graphics with 64GB of HBM2, and 256GB SSD.

You can now leverage Apple’s tensorflow-metal PluggableDevice in TensorFlow v2.5 for accelerated training on Mac GPUs directly with Metal.

Solution 7 - Cuda

As of 2019-10-10 I have not tested it, but there is the "GPU Ocelot" project

http://gpuocelot.gatech.edu/

that according to its advertisement tries to compile CUDA code for a variety of targets, including AMD GPUs.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         Original Author             Original Content on Stackoverflow
Question             Lee Jacobs                  View Question on Stackoverflow
Solution 1 - Cuda    Bart                        View Answer on Stackoverflow
Solution 2 - Cuda    Hugh Perkins                View Answer on Stackoverflow
Solution 3 - Cuda    Yeasin Ar Rahman            View Answer on Stackoverflow
Solution 4 - Cuda    sandeep.ganage              View Answer on Stackoverflow
Solution 5 - Cuda    Léo Léopold Hertz 준영      View Answer on Stackoverflow
Solution 6 - Cuda    Mohan Radhakrishnan         View Answer on Stackoverflow
Solution 7 - Cuda    Martin Vahi                 View Answer on Stackoverflow