Tuesday, 9 April 2019

An Introduction to CUDA

CUDA is a parallel computing platform and programming model, developed by Nvidia, that makes using a GPU for general purpose computing simple and elegant. It is an extension of the C programming language. CUDA programming lets the coder harness the enormous parallel computing power of an Nvidia graphics card for general purpose computation. Before continuing, it is worth discussing this for a bit longer.


CPUs such as the Intel Core 2 Duo and the AMD Opteron are very good at doing one or two tasks at a time and performing those tasks very quickly. Graphics cards, on the other hand, are very good at doing a massive number of tasks at the same time and performing each of them reasonably fast. To put this into perspective, imagine you have a twenty-inch monitor with a typical resolution of 1,920 x 1,200. An Nvidia graphics card has the computational power to calculate the color of those 2,304,000 different pixels many times a second.

To achieve this feat, graphics cards use dozens, sometimes hundreds, of ALUs. Fortunately, Nvidia's ALUs are fully programmable, which lets us harness an unprecedented amount of computational power in the programs we write.


As mentioned previously, CUDA lets the coder exploit the hundreds of ALUs inside a graphics processor, which is far more powerful than the handful of ALUs available in any CPU. However, this does limit the kinds of applications that are well suited to CUDA.

CUDA is only well suited for highly parallel algorithms

To run effectively on a GPU, you need many hundreds of threads. In general, the more threads you have, the better. If you have an algorithm that is mostly serial, then it doesn't make sense to use CUDA. Many serial algorithms do have parallel equivalents, but some do not. If you cannot break your problem down into at least a thousand threads, then CUDA probably isn't the right solution for you.
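For instance, here is a rough sketch (plain CUDA-style C, with an illustrative problem size N that is not from the article) of how a dataset gets carved into blocks of threads. One thread per element is the usual starting point:

    #include <cstdio>

    // Hypothetical problem size: one thread per data element.
    #define N 1000000

    int main(void) {
        // A block typically holds a few hundred threads; 256 is a common choice.
        int threadsPerBlock = 256;
        // Round up so every element is covered by some thread.
        int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;
        printf("%d elements -> %d blocks x %d threads = %d threads total\n",
               N, blocks, threadsPerBlock, blocks * threadsPerBlock);
        return 0;
    }

If the count that comes out of a calculation like this is in the single digits rather than the thousands, that is a sign the problem may be too serial for CUDA.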

CUDA is extremely well suited for number crunching

There is one thing CUDA excels at: number crunching. The GPU is fully capable of doing both 32-bit integer and floating point operations. In fact, GPUs are better suited to floating point computations, which makes CUDA great for number crunching. Some of the higher end graphics cards do have double precision floating point units; however, there is only one 64-bit floating point unit for every sixteen 32-bit floating point units. Thus, using double precision floating point numbers with CUDA should be avoided unless they are essential to your program.
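To make that concrete, here is a sketch of the same trivial kernel written twice, once with 32-bit floats and once with 64-bit doubles. The kernel names and the scale-by-a-factor operation are made up for illustration:

    // Hypothetical kernels: both do the same work, but on most consumer
    // cards the float version runs far faster because 64-bit units are
    // scarce relative to 32-bit ones.
    __global__ void scaleFloat(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;   // 32-bit floating point: the fast path
    }

    __global__ void scaleDouble(double *data, double factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;   // 64-bit: avoid unless you really need it
    }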


CUDA is well suited for large datasets

Most modern CPUs have a couple of megabytes of L2 cache because most programs have high data coherency. However, when working quickly across a large dataset, say 500 megabytes, the L2 cache may not be as helpful.

The memory interface of a GPU is very different from the memory interface of a CPU. GPUs use massively parallel interfaces to connect with their memory. For example, the GTX 280 uses a 512-bit interface to its high-end GDDR3 memory. This kind of interface is roughly ten times faster than a typical CPU-to-memory interface, which is fantastic.
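If you are curious about your own card, the CUDA runtime can report the width of its memory interface. A minimal sketch, assuming a reasonably recent CUDA toolkit where cudaDeviceProp exposes the memoryBusWidth field:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void) {
        cudaDeviceProp prop;
        // Query device 0; memoryBusWidth reports the interface width in bits.
        cudaGetDeviceProperties(&prop, 0);
        printf("%s: %d-bit memory interface\n", prop.name, prop.memoryBusWidth);
        return 0;
    }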

It's worth noting that most Nvidia graphics cards don't have more than one gigabyte of memory. Nvidia does make dedicated CUDA compute cards with as much as four gigabytes of RAM onboard, but these cards are more expensive than cards originally intended for gaming.
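Before committing a large dataset to the card, you can check how much memory is actually available with cudaMemGetInfo. A minimal sketch:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void) {
        size_t freeBytes, totalBytes;
        // Report how much device memory is free right now, and the total.
        cudaMemGetInfo(&freeBytes, &totalBytes);
        printf("free: %zu MB, total: %zu MB\n",
               freeBytes >> 20, totalBytes >> 20);
        return 0;
    }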

Writing a kernel in CUDA

As mentioned earlier, CUDA can be taken full advantage of when writing in C. This is good news, since most programmers are already familiar with C. Also as stated earlier, the central concept of CUDA is having thousands of threads executing in parallel.

What was not stated is that all of these threads execute the very same function, known as a kernel. Understanding what the kernel is and how it works is critical to your success when writing an application that uses CUDA.

The idea is that even though all of the threads in your program execute the same function, each thread works on a different piece of the dataset. Every thread knows its own ID, and based on that ID it decides which pieces of data to work on. Flow control statements such as if, do, while, and for are supported.
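Here is a minimal kernel sketch to make that concrete: a hypothetical element-wise vector add (the name vecAdd and the operation are illustrative, not from any particular source). Each thread computes its own global ID from its block and thread indices, then uses an if statement to stay within the bounds of the data:

    // Each thread computes one element of c = a + b.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's ID
        if (i < n)                                      // flow control works
            c[i] = a[i] + b[i];
    }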


Writing programs with CUDA

One important thing to keep in mind is that your entire program doesn't have to be written in CUDA. If you're writing a large program, complete with a user interface and many other features, then most of your code will be written in C++ or whatever language you prefer.

Then, when something very computationally intensive is required, the application can simply call the CUDA kernel function you wrote. So the main idea is that CUDA should only be used for the most computationally intensive portions of your program.
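Putting the pieces together, here is a sketch of a complete little program: ordinary C/C++ on the host, with the hypothetical vecAdd kernel from above called only for the number crunching. The sizes and values are illustrative:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Same hypothetical kernel sketched earlier.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // The host side is ordinary C/C++.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Only the hot spot goes through CUDA: copy in, launch, copy out.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);   // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The pattern is always the same: copy the inputs to the card, launch the kernel, and copy the results back, while the rest of the application stays in whatever language it was already written in.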

CUDA without a graphics card

While CUDA is specifically designed to run on Nvidia's graphics cards, it can also run on any CPU. The program will never run nearly as fast on a CPU, but it will still work.


Author:

Designveloper is the leading software development company in Ho Chi Minh City, Vietnam, founded in early 2013 with a team of professional and enthusiastic Web developers, Mobile developers, UI/UX designers and VOIP experts.
