
NVIDIA Launches NIM Microservices for Generative AI in Japan and Taiwan

2024/08/27 11:04

Alvin Lang, Aug 27, 2024 02:52. NVIDIA launches NIM microservices to support generative AI in Japan and Taiwan, strengthening regional language models and local AI applications.

NVIDIA has introduced its NIM microservices for generative AI applications in Japan and Taiwan, aiming to bolster regional language models and support the development of localized generative AI applications.

Announced in an NVIDIA blog post on Saturday, the new microservices are designed to help developers build and deploy generative AI applications that are sensitive to local languages and cultural nuances. The microservices support popular community models, enhancing user interactions through improved understanding and responses based on regional languages and cultural heritage.

According to ABI Research, generative AI software revenue in the Asia-Pacific region is projected to reach $48 billion by 2030, up from $5 billion in 2024. NVIDIA's new microservices are expected to play a significant role in this growth by providing advanced tools for AI development.

Among the new offerings are the Llama-3-Swallow-70B and Llama-3-Taiwan-70B models, trained on Japanese and Mandarin data respectively. These models are designed to provide a deeper understanding of local laws, regulations, and customs.

The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets and is available as NIM microservices for Chat and Instruct functionalities. These models achieved leading average scores among open Japanese large language models in the LM Evaluation Harness benchmark from January to March 2024.

Several organizations in Japan and Taiwan are already using NVIDIA's NIM microservices to develop and deploy generative AI applications.

For instance, the Tokyo Institute of Technology has fine-tuned the Llama-3-Swallow 70B model using Japanese-language data. Preferred Networks, a Japanese AI company, is using the model to develop a healthcare-specific AI trained on Japanese medical data, achieving top scores on the Japan National Examination for Physicians.

In Taiwan, Chang Gung Memorial Hospital is building a custom AI Inference Service to centrally host LLM applications within the hospital system, using the Llama-3-Taiwan 70B model to improve medical communication. Pegatron, a Taiwan-based electronics manufacturer, is adopting the model for both internal and external applications, integrating it with its PEGAAi Agentic AI System to boost efficiency in manufacturing and operations.

Developers can now deploy these sovereign AI models, packaged as NIM microservices, into production at scale while achieving improved performance. The microservices, available with NVIDIA AI Enterprise, are optimized for inference with the NVIDIA TensorRT-LLM open-source library, providing up to 5x higher throughput and lowering the total cost of running the models in production.

The new NIM microservices are available today as hosted application programming interfaces (APIs).
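As a rough illustration of what calling one of these hosted endpoints could look like: NVIDIA's hosted NIM APIs follow an OpenAI-style chat-completions shape. The endpoint URL and model identifier below are assumptions for the sketch, not details confirmed by the article; substitute the values from your own NVIDIA API catalog entry.

```python
import json
import urllib.request

# Assumed hosted NIM endpoint (OpenAI-compatible chat completions).
# Both the URL and the model name are illustrative placeholders.
NIM_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
DEFAULT_MODEL = "meta/llama-3-taiwan-70b-instruct"  # hypothetical identifier

def build_request(prompt: str, model: str = DEFAULT_MODEL) -> dict:
    """Assemble an OpenAI-style chat payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def ask_nim(prompt: str, api_key: str) -> str:
    """POST the payload to the hosted API and return the reply text."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text under choices[0].
    return body["choices"][0]["message"]["content"]
```

Because the API is OpenAI-compatible, existing client libraries that accept a custom base URL should also work against such an endpoint with only configuration changes.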

To learn more about how NVIDIA NIM can accelerate generative AI outcomes, visit the NVIDIA NIM product page.

Generative AI models, such as LLMs, have gained popularity for their ability to perform various tasks, including generating text, code, images, and videos. However, deploying these models can be challenging, especially for organizations that require fast and accurate results.

To address this need, NVIDIA offers a range of solutions, including the NVIDIA AI Enterprise software platform and the NVIDIA AI Registry, that provide security, performance optimization, and centralized management for generative AI models.

With these solutions, organizations can deploy models quickly and efficiently, ensuring optimal performance and reliability for their applications.

News source: blockchain.news

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made based on the information in this article. Cryptocurrencies are highly volatile; please research thoroughly and invest with caution.

If you believe content on this site infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
