bitcoin      $97355.77 USD     -0.29%
ethereum     $3294.38 USD      -2.00%
tether       $1.00 USD          0.02%
solana       $254.61 USD        3.88%
bnb          $619.56 USD       -0.93%
xrp          $1.43 USD         26.22%
dogecoin     $0.397649 USD      2.27%
usd-coin     $0.999840 USD     -0.02%
cardano      $0.898472 USD     13.47%
tron         $0.198334 USD     -0.87%
avalanche    $38.55 USD         9.15%
shiba-inu    $0.000025 USD     -1.02%
toncoin      $5.44 USD         -1.48%
sui          $3.50 USD         -2.15%
chainlink    $15.08 USD        -0.63%

Crypto News

NVIDIA Launches NIM Microservices for Generative AI in Japan and Taiwan

2024/08/27 11:04

Alvin Lang, Aug 27, 2024, 02:52. NVIDIA launches NIM microservices to support generative AI in Japan and Taiwan, strengthening regional language models and local AI applications.

NVIDIA has introduced its NIM microservices for generative AI in Japan and Taiwan, aiming to bolster regional language models and support the development of localized generative AI applications.

Announced in an NVIDIA blog post on Saturday, the new microservices are designed to help developers build and deploy generative AI applications that are sensitive to local languages and cultural nuances. The microservices support popular community models, enhancing user interactions through improved understanding and responses based on regional languages and cultural heritage.

According to ABI Research, generative AI software revenue in the Asia-Pacific region is projected to reach $48 billion by 2030, up from $5 billion in 2024. NVIDIA's new microservices are expected to play a significant role in this growth by providing advanced tools for AI development.

Among the new offerings are the Llama-3-Swallow-70B and Llama-3-Taiwan-70B models, trained on Japanese and Mandarin data respectively. These models are designed to provide a deeper understanding of local laws, regulations, and customs.

The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets and is available as NIM microservices for Chat and Instruct functionalities. These models achieved the leading average scores among open Japanese large language models on the LM Evaluation Harness benchmark from January to March 2024.

Several organizations in Japan and Taiwan are already using NVIDIA's NIM microservices to develop and deploy generative AI applications.

For instance, the Tokyo Institute of Technology has fine-tuned the Llama-3-Swallow 70B model using Japanese-language data. Preferred Networks, a Japanese AI company, is using the model to develop a healthcare-specific AI trained on Japanese medical data, achieving top scores on the Japan National Examination for Physicians.

In Taiwan, Chang Gung Memorial Hospital is building a custom AI Inference Service to centrally host LLM applications within the hospital system, using the Llama-3-Taiwan 70B model to improve medical communication. Pegatron, a Taiwan-based electronics manufacturer, is adopting the model for both internal and external applications, integrating it with its PEGAAi Agentic AI System to boost efficiency in manufacturing and operations.

Developers can now deploy these sovereign AI models, packaged as NIM microservices, into production at scale while achieving improved performance. The microservices, available with NVIDIA AI Enterprise, are optimized for inference with the NVIDIA TensorRT-LLM open-source library, providing up to 5x higher throughput and lowering the total cost of running the models in production.
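
From an application's perspective, a deployed NIM microservice is typically reached over an OpenAI-style HTTP interface. The sketch below is an illustration rather than NVIDIA's documented procedure: the localhost address, port 8000, the /v1/chat/completions route, and the model identifier are assumptions to verify against the documentation for the specific microservice being deployed.

```python
# Minimal sketch: query a self-hosted NIM microservice over HTTP.
# Assumptions (verify against the NIM docs for the deployed service): the
# service listens on localhost:8000 and exposes an OpenAI-compatible
# /v1/chat/completions route; the model ID below is a placeholder.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL_ID = "llama-3-taiwan-70b-instruct"  # placeholder model identifier

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "請用繁體中文簡述台灣的勞動基準法重點。"}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape would apply to the Japanese-language models; only the model identifier and the prompt change.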

The new NIM microservices are available today as hosted application programming interfaces (APIs).
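
Because the hosted services are exposed as APIs, a minimal sketch of calling one with the OpenAI-compatible Python client is shown below. The base URL, the NVIDIA_API_KEY environment variable, and the model identifier are assumptions to confirm against NVIDIA's API catalog rather than values taken from the announcement.

```python
# Minimal sketch: call a hosted NIM API via the OpenAI-compatible client.
# Assumptions (confirm against NVIDIA's API catalog): the hosted endpoint is
# https://integrate.api.nvidia.com/v1, the key is read from NVIDIA_API_KEY,
# and the model identifier below is a placeholder.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="tokyotech-llm/llama-3-swallow-70b-instruct",  # placeholder ID
    messages=[
        {"role": "user", "content": "日本の祝日について簡潔に説明してください。"}
    ],
    max_tokens=256,
    temperature=0.2,
)

print(completion.choices[0].message.content)
```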

To learn more about how NVIDIA NIM can accelerate generative AI outcomes, visit the product page here.

Generative AI models, such as LLMs, have gained popularity for their ability to perform various tasks, including generating text, code, images, and videos. However, deploying these models can be challenging, especially for organizations that require fast and accurate results.

To address this need, NVIDIA offers a range of solutions, including the NVIDIA AI Enterprise software platform and the NVIDIA AI Registry, that provide security, performance optimization, and centralized management for generative AI models.

With these solutions, organizations can deploy models quickly and efficiently, ensuring optimal performance and reliability for their applications.

Source: blockchain.news

Disclaimer: info@kdj.com

The information provided does not constitute trading advice. kdj.com assumes no liability for any investments made on the basis of the information provided in this article. Cryptocurrencies are highly volatile; it is strongly recommended that you research thoroughly and invest with caution.

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
