NVIDIA AI Foundry Introduces Custom Llama 3.1 Generative AI Models for Enterprises

Ted Hisokawa Jul 23, 2024 18:50

NVIDIA has released NVIDIA AI Foundry and NIM microservices, allowing enterprises to create Llama 3.1 generative AI models tailored to their industries.

NVIDIA announced the launch of its NVIDIA AI Foundry service and NVIDIA NIM™ inference microservices to revolutionize generative AI capabilities for enterprises around the world. The announcement coincides with the release of Meta's openly available Llama 3.1 collection of models, positioned as a way to give businesses advanced AI tools.

Custom AI Solutions for Enterprises

NVIDIA AI Foundry allows enterprises and nations to create bespoke "supermodels" tailored to specific industry needs, using Llama 3.1 and NVIDIA technology. These models can be built with proprietary data as well as synthetic data generated from Llama 3.1 405B and the NVIDIA Nemotron™ reward model.

AI Foundry is powered by the NVIDIA DGX Cloud AI platform, which is co-engineered with leading public cloud providers and offers scalable compute resources that grow with changing AI needs. The service is designed to help enterprises, as well as nations developing sovereign AI strategies, build large language models for domain-specific applications.

Key Industry Adoption

Accenture is the first company to use NVIDIA AI Foundry to build custom Llama 3.1 models for its clients. Companies including Aramco and Uber are among the early adopters of Llama 3.1 NVIDIA NIM microservices, signaling strong interest across industries.

Jensen Huang, founder and CEO of NVIDIA, said: "Meta's Llama 3.1 models are now openly available, marking a significant moment for the adoption of generative AI within enterprises around the globe. Llama 3.1 opens the floodgates for every enterprise and industry to build state-of-the-art generative AI applications. NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels."

Enhanced AI Capabilities

NVIDIA NIM microservices for Llama 3.1 are now available for download, promising up to 2.5x higher throughput than traditional inference methods. These microservices can be paired with the new NVIDIA NeMo Retriever NIM microservices to build advanced AI retrieval pipelines for digital assistants and human avatars.
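
As a rough illustration of how such a deployment is typically consumed, the sketch below queries a Llama 3.1 NIM microservice through its OpenAI-compatible chat API. The base URL, API key and model identifier are assumptions for a hypothetical local deployment, not details from the announcement.

```python
# Minimal sketch: querying a Llama 3.1 NIM microservice through its
# OpenAI-compatible API. The endpoint URL, credential and model name
# are assumptions for a hypothetical local deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-for-local",       # placeholder credential
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Summarize our Q2 support tickets in three bullet points."},
    ],
    temperature=0.2,
    max_tokens=256,
)

print(response.choices[0].message.content)
```

In a retrieval setup, a NeMo Retriever NIM service would typically sit in front of such a call, fetching relevant enterprise documents that are then passed in as additional context.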

Accenture is the first to adopt NVIDIA AI Foundry, using its AI Refinery™ framework to develop custom Llama 3.1 models. Accenture chair and CEO Julie Sweet said that the world's largest enterprises see how generative AI is transforming every industry and want to deploy applications powered by custom models. "Accenture has been using NVIDIA NIM microservices to build our internal AI applications. Now, with NVIDIA AI Foundry, we can help clients create and deploy custom Llama 3.1 models that will power their transformative AI applications."

Comprehensive AI Model Services

NVIDIA AI Foundry is an end-to-end service that spans model curation, fine-tuning and retrieval. Using Llama 3.1 models and the NVIDIA NeMo platform, enterprises can create domain-specific models, with the option of generating synthetic data to improve accuracy.
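
To make the synthetic data option concrete, here is a minimal sketch of one common pattern: prompting a large Llama 3.1 model for reference answers to domain questions and writing prompt/completion pairs to a JSONL file for later fine-tuning. The endpoint, model name and record format are illustrative assumptions, not the AI Foundry or NeMo APIs.

```python
# Minimal sketch of synthetic fine-tuning data generation, assuming an
# OpenAI-compatible Llama 3.1 endpoint. Endpoint, model name and the
# JSONL record format are illustrative assumptions, not AI Foundry APIs.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-for-local")

seed_questions = [
    "How do I reset a customer's billing cycle?",
    "What documents are required to open a corporate account?",
]

with open("synthetic_train.jsonl", "w", encoding="utf-8") as f:
    for question in seed_questions:
        # Ask the large model to draft a reference answer for the domain question.
        answer = client.chat.completions.create(
            model="meta/llama-3.1-405b-instruct",  # assumed model identifier
            messages=[
                {"role": "system", "content": "Answer as a bank operations expert."},
                {"role": "user", "content": question},
            ],
            temperature=0.7,
            max_tokens=300,
        ).choices[0].message.content

        # One prompt/completion pair per line, a format many fine-tuning
        # pipelines accept (assumed here for illustration).
        f.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
```

In a full pipeline, the generated pairs would usually be scored with a reward model such as Nemotron and filtered before fine-tuning; that step is omitted here for brevity.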

NVIDIA has also teamed up with Meta to provide a distillation recipe for Llama 3.1, allowing developers to create smaller custom models that can run on a wider range of infrastructure, including AI workstations and laptops.
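
The distillation recipe itself is not published in this announcement, so the sketch below only illustrates the general idea behind knowledge distillation: training a smaller student model to match the temperature-softened output distribution of a larger teacher via a KL-divergence loss. Model loading and data handling are omitted, and all tensors are toy placeholders.

```python
# Generic knowledge-distillation loss sketch (not NVIDIA's or Meta's recipe):
# a small "student" model is trained to match the temperature-softened output
# distribution of a larger "teacher" model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student token distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Toy usage with random logits standing in for real model outputs.
batch, seq_len, vocab = 2, 8, 128256  # 128256 is the Llama 3.1 vocabulary size
teacher_logits = torch.randn(batch * seq_len, vocab)
student_logits = torch.randn(batch * seq_len, vocab, requires_grad=True)

loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```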

The Llama 3.1 models were trained on more than 16,000 NVIDIA Tensor Core GPUs, and NVIDIA NIM microservices for Llama 3.1 are already being adopted by leading companies in healthcare, energy and financial services.

Future Prospects

Production support for Llama 3.1 NIM and NeMo Retriever NIM microservices is available through NVIDIA AI Enterprise. Members of the NVIDIA Developer Program will also soon be able to access NIM microservices free of charge for research, development and testing.

Visit the NVIDIA newsroom for more information.



Image source: Shutterstock