Two years ago, the renowned artificial intelligence research laboratory OpenAI introduced Triton, a specialized programming language, in a scientific paper. The language is meant to let developers easily create high-performance machine learning algorithms.
This week, OpenAI released an updated version of the language, Triton 1.0, to its GitHub repository. It is suitable for enterprise machine learning projects and performs many AI code optimizations automatically, saving developers time.
The vast majority of enterprise AI models run on Nvidia GPUs and are built using Nvidia’s CUDA software. This framework provides the basic building blocks for performing AI computations on the GPU, notes NIXsolutions.
However, leveraging CUDA for maximum AI performance requires complex, low-level code optimizations that are difficult even for experienced developers to implement.
OpenAI addresses this problem with Triton. The relative simplicity of the new language lets software development teams create more efficient algorithms even without extensive CUDA programming experience.
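To give a sense of that simplicity, here is a minimal sketch modeled on the vector-addition example from the Triton tutorials. The kernel name, tensor sizes, and block size are illustrative choices, not details from the OpenAI announcement:

```python
# A minimal Triton kernel sketch: element-wise addition of two vectors.
# Written against the Triton Python API; `add_kernel` and the block
# size of 1024 are illustrative choices, not fixed by Triton.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)              # which block this instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard against the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)  # block-level loads; Triton handles
    y = tl.load(y_ptr + offsets, mask=mask)  # coalescing and scheduling
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device='cuda')
y = torch.randn(4096, device='cuda')
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)       # one program instance per block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```

Note that the kernel operates on whole blocks of data rather than individual threads, which is what allows the compiler to take over the low-level tuning work described below.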
Triton improves AI performance by automating three main steps of the machine learning process, all of which appear in the kernel sketch that follows this list:
- Moving data between DRAM and GPU SRAM. The faster data travels between these two kinds of memory, the faster machine learning algorithms run. Triton automatically aggregates the data moved from DRAM to SRAM into larger blocks, which speeds up the transfers and spares developers from coding this by hand.
- Allocating incoming data blocks to SRAM segments so they can be processed as quickly as possible. Here Triton reduces the likelihood of so-called memory bank conflicts, which arise when two operations try to use the same memory bank at the same time. Such conflicts stall computations until they are resolved, slowing down AI algorithms.
- The third and final task, which Triton helps to partially automate, is distributing computations across the GPU's many CUDA cores so they run in parallel.
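The sketch below shows all three steps in one place. It is adapted from the fused-softmax example in the Triton tutorials and assumes a contiguous, row-major input matrix; the kernel name and sizes are again illustrative:

```python
# A softmax-per-row kernel sketch touching all three steps above.
# Assumes a contiguous row-major input, so a row's stride is n_cols.
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # Step 3: each program instance handles one row, so rows are
    # processed in parallel across the GPU.
    row = tl.program_id(0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    # Step 1: a single block-level load moves the whole row from
    # DRAM into on-chip SRAM at once.
    x = tl.load(in_ptr + row * n_cols + col_offsets,
                mask=mask, other=-float('inf'))
    # Step 2: the row stays resident in SRAM for every pass below;
    # Triton chooses the SRAM layout, which is where bank
    # conflicts are avoided.
    x = x - tl.max(x, axis=0)
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + col_offsets, y, mask=mask)

x = torch.randn(128, 1000, device='cuda')
out = torch.empty_like(x)
BLOCK = triton.next_power_of_2(x.shape[1])    # tl.arange needs a power of 2
softmax_kernel[(x.shape[0],)](out, x, x.shape[1], BLOCK_SIZE=BLOCK)
```

In CUDA, an equivalent kernel would need explicit shared-memory management and per-thread indexing; here those decisions are left to Triton's compiler.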