What is Vulkan?

Vulkan is not only a planet in the Star Trek universe, but also a graphics API. Who is behind it, and what can Vulkan do?

Vulkan is an application programming interface (API) focused on 2D and 3D graphics. Since it was planned as a successor to OpenGL, the API was initially called Next Generation OpenGL or glNext. As a so-called low-level (or low-overhead) API, Vulkan allows developers to program closer to the hardware than, for example, DirectX 11, giving them more direct access to and control over the graphics unit. In addition, work can be distributed more evenly across CPU cores. All of this increases performance and efficiency while reducing driver complexity and driver overhead.

How was Vulkan developed?

Vulkan was developed by the Khronos Group. It is based on AMD’s low-level API Mantle: AMD donated Mantle’s components to the Khronos Group, giving the consortium a basis on which to develop its own low-level API that could serve as a cross-platform standard for the entire industry. Vulkan was first announced at the GDC in 2015 and released in February 2016. The current version, 1.1.101, dates from February 2019.

Who can use Vulkan?

Vulkan is open source as well as cross-platform and is supported by all major hardware manufacturers, including Intel, AMD and Nvidia. In addition, it is compatible with various operating systems and can thus be used under Windows, Linux, Android, macOS, iOS and others. Consequently, Vulkan also runs on a variety of devices, such as PCs, consoles, smartphones and embedded platforms.

More information can be found at Techcrunch.


Related articles

What is OpenGL? (25 Feb 2019)

After introducing you to DirectX last week, we are going to move on to the next API and address the question: What is OpenGL?

The term OpenGL is an abbreviation of “Open Graphics Library” and describes an API (application programming interface) for developing 2D and 3D graphics applications. OpenGL is cross-platform and supports multiple programming languages. That means that, as with DirectX, the API facilitates the development of graphics applications and software: they only need to be adapted to the OpenGL standard rather than to various operating systems and graphics hardware. The OpenGL standard describes about 250 commands. Other organizations – such as manufacturers of graphics cards – may define proprietary (i.e. manufacturer-bound) extensions.

Where is OpenGL used?

Applications using OpenGL include computer games, virtual reality, augmented reality, 3D animation, CAD and other visual simulations.

How is OpenGL used?

OpenGL is supported by most popular operating systems, including Microsoft Windows, MacOS, Solaris, Linux, Android, Apple iOS, Xbox 360 and more. The API has language bindings for the programming languages C, C++, Fortran, Ada and Java.

How was OpenGL developed?

OpenGL was released in 1992. The workstation manufacturer Silicon Graphics (SGI) had originally developed the proprietary programming interface IRIS GL. After some time, the API was overhauled, the proprietary code was removed, and IRIS GL was released as the industry standard OpenGL. New features were often first introduced as manufacturer-specific extensions; over time they were adopted across manufacturers and then promoted to core features. In this way, OpenGL has evolved up to the current version, 4.6. Since July 2006, the Khronos Group – an industry consortium including Intel, AMD, Nvidia, Apple and Google – has been responsible for the development of OpenGL.

What is the future of OpenGL?

In March 2015, Vulkan was presented as the successor to OpenGL at the Game Developers Conference. Initially known as “Next Generation OpenGL” or “glNext”, the programming interface is open source and likewise cross-platform. The difference from OpenGL is that programming is done closer to the hardware, which can significantly increase performance. Some PC games already support Vulkan, but most still use DirectX. Vulkan is also developed by the Khronos Group.

What is CUDA? (4 Mar 2019)

While launching our new Mini-PCs QUADRO P1000 and TEGRA 2 we already talked a lot about NVIDIA CUDA and the so-called CUDA cores. But what is CUDA actually?

What does "CUDA" mean?

The term CUDA is an acronym for "Compute Unified Device Architecture".

What is CUDA?

CUDA is an NVIDIA architecture for parallel computing. It increases the computing power of a PC by using the graphics processor for general calculations as well.
In the past, OpenGL and DirectX were the only ways to interact with GPUs, and these APIs were mainly suited to multimedia applications; general calculations were performed only on the CPU.

Since graphics cards are ideal for computation-intensive, parallel workloads, modern operating systems (Windows 7 and up) no longer use GPUs only for graphics, but as general-purpose parallel processors that any application can access. In this way, calculations run in parallel on the CPU and the graphics processor, which increases performance enormously. NVIDIA CUDA supports this and enables easy and efficient parallel computing. There are now thousands of applications, countless research reports and a wide selection of CUDA tools and solutions.
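The data-parallel idea behind this can be sketched in plain Python (CUDA itself is programmed in C/C++ and other languages, as described below). SAXPY (y = a·x + y), a classic introductory GPU example, applies the same instruction to every element independently – exactly the kind of work a GPU can split across its many cores. This is only a conceptual CPU-side sketch, not actual CUDA code:

```python
# SAXPY (y = a*x + y): a classic "hello world" of GPU computing.
# On a GPU, each element-wise operation could run on its own core;
# here we merely mimic that one-operation-per-element view in Python.

def saxpy(a, x, y):
    # Each index i is an independent task -- no element depends on
    # another, which is what makes this trivially parallelizable.
    return [a * x[i] + y[i] for i in range(len(x))]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

Because no element depends on any other, a GPU is free to compute all of them at the same time instead of one after another.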

What is a CUDA core?

CUDA cores are often considered the GPU equivalent of CPU cores. However, CUDA cores are far less complex and appear in much larger numbers. While typical Intel CPUs have between 2 and 8 cores, the NVIDIA Quadro P1000, for example, which is installed in our identically named Mini-PC, has 640 CUDA cores. High-end graphics cards, such as those of NVIDIA’s Turing generation, often have over 4,000 cores. This high number is necessary because many graphics calculations often have to be performed simultaneously. Since GPUs are specialized for this purpose, their cores are also built much more narrowly and are therefore smaller than CPU cores.

A detailed explanation of this topic can be found at Gamingscan. If you want to get even deeper into the topic and are interested in the exact difference between CUDA cores and CPU cores, you should check out the video "Why CUDA 'Cores' Aren’t Actually Cores" from Gamers Nexus.

In which areas is CUDA used?

CUDA is used in a variety of fields: in image and video processing, but also in the medical field, for example in CT image reconstruction. AI, deep learning and machine learning also often rely on CUDA, because these workloads benefit greatly from parallel computation. Other areas include computational biology and chemistry, ray tracing, seismic analysis and more.

Which is the current version of CUDA?

Since CUDA was introduced in 2006, it has evolved enormously. In October 2018, CUDA 10 was unveiled, along with the launch of the new Turing GPUs. More information about the new features can be found on the NVIDIA Developer Blog.

How is CUDA programmed?

CUDA can be programmed in C, C++, Fortran, Python and MATLAB.
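Regardless of the language, the programming model is the same: a kernel describes what a single thread does, and the hardware launches one thread per data element, organized into blocks within a grid. The following is a rough illustration of that model in plain Python – the names `kernel` and `launch` are our own for this sketch, and real CUDA runs all threads concurrently rather than in a loop:

```python
# Conceptual sketch of the CUDA execution model (illustrative only --
# real kernels are written in CUDA C/C++ or via language bindings).

def kernel(thread_idx, block_idx, block_dim, x, out):
    # In CUDA, each thread computes its global index from its block
    # and thread coordinates, then handles exactly one element.
    i = block_idx * block_dim + thread_idx
    if i < len(x):            # guard threads that fall past the data
        out[i] = x[i] * x[i]

def launch(kernel, grid_dim, block_dim, *args):
    # A GPU would run all of these "threads" concurrently;
    # here we simply iterate over them sequentially.
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(thread_idx, block_idx, block_dim, *args)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
out = [0.0] * len(x)
launch(kernel, 2, 4, x, out)  # 2 blocks of 4 threads = 8 threads
print(out)  # [1.0, 4.0, 9.0, 16.0, 25.0]
```

The bounds check inside the kernel is typical: the grid usually contains slightly more threads than data elements, and surplus threads simply do nothing.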

How can CUDA be used?

With CUDA you can work under Windows, Linux and macOS – provided you have the right hardware: graphics cards of the NVIDIA GeForce, Quadro and Tesla series, as well as NVIDIA GRID solutions. An overview of CUDA-enabled GPUs can be found on NVIDIA’s website, where the CUDA Toolkit is also available.

What is Mantle? (11 Mar 2019)

In the next part of our small graphics API series, it is getting a little bit more special: we will look at the special features of AMD’s Mantle.

Mantle is a programming interface (API) for graphics output. It was released in 2013 and developed by AMD, originally together with the Swedish studio DICE, whose PC game Battlefield 4 was the first to use Mantle. The API was designed as an alternative to OpenGL and Direct3D (the graphics component of DirectX).

What distinguishes Mantle from other graphics APIs?

Mantle is a so-called low-level API. “Low-level” means that the API allows programming close to the system. Developers have more control – similar to programming on game consoles – and can use the existing hardware more effectively, which increases the performance of the CPU and the graphics unit. In addition, the driver overhead (data needed only for transfer or management rather than for the actual workload) and the memory demand are reduced, and multithreading is simplified.

Is Mantle cross-platform?

In parts: Mantle supports the GPUs in the PlayStation 4 and the Xbox One, but not the graphics chips of other PC hardware manufacturers such as Intel or Nvidia.

What does the future of Mantle look like?

Due to strong competition from other cross-platform APIs, AMD announced in March 2015 that Mantle would no longer be developed. Instead, DirectX 12 and Vulkan, which is based on Mantle, were recommended.
