Cryptocurrency News Articles

NVIDIA Introduces NIM Microservices for Generative AI in Japan and Taiwan

Aug 27, 2024 at 11:04 am

By Alvin Lang, Aug 27, 2024 02:52
NVIDIA launches NIM microservices to support generative AI in Japan and Taiwan, enhancing regional language models and local AI applications.


NVIDIA has introduced its NIM microservices for generative AI applications in Japan and Taiwan, aiming to bolster regional language models and support the development of localized generative AI applications.

Announced in an NVIDIA blog post on Saturday, the new microservices are designed to help developers build and deploy generative AI applications that are sensitive to local languages and cultural nuances. The microservices support popular community models, enhancing user interactions through improved understanding and responses based on regional languages and cultural heritage.

According to ABI Research, generative AI software revenue in the Asia-Pacific region is projected to reach $48 billion by 2030, up from $5 billion in 2024. NVIDIA's new microservices are expected to play a significant role in this growth by providing advanced tools for AI development.

Among the new offerings are the Llama-3-Swallow-70B and Llama-3-Taiwan-70B models, trained on Japanese and Mandarin data respectively. These models are designed to provide a deeper understanding of local laws, regulations, and customs.

The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets and is available as NIM microservices for Chat and Instruct functionalities. These models achieved leading average scores among open Japanese large language models in the LM Evaluation Harness benchmark from January to March 2024.

Several organizations in Japan and Taiwan are already using NVIDIA's NIM microservices to develop and deploy generative AI applications.

For instance, the Tokyo Institute of Technology has fine-tuned the Llama-3-Swallow-70B model using Japanese-language data. Preferred Networks, a Japanese AI company, is using the model to develop a healthcare-specific AI trained on Japanese medical data, achieving top scores on the Japan National Examination for Physicians.

In Taiwan, Chang Gung Memorial Hospital is building a custom AI Inference Service to centrally host LLM applications within the hospital system, using the Llama-3-Taiwan-70B model to improve medical communication. Pegatron, a Taiwan-based electronics manufacturer, is adopting the model for both internal and external applications, integrating it with its PEGAAi Agentic AI System to boost efficiency in manufacturing and operations.

Developers can now deploy these sovereign AI models, packaged as NIM microservices, into production at scale while achieving improved performance. The microservices, available with NVIDIA AI Enterprise, are optimized for inference with the NVIDIA TensorRT-LLM open-source library, providing up to 5x higher throughput and lowering the total cost of running the models in production.

The new NIM microservices are available today as hosted application programming interfaces (APIs).

To learn more about how NVIDIA NIM can accelerate generative AI outcomes, visit the NVIDIA NIM product page.

Generative AI models, such as LLMs, have gained popularity for their ability to perform various tasks, including generating text, code, images, and videos. However, deploying these models can be challenging, especially for organizations that require fast and accurate results.

To address this need, NVIDIA offers a range of solutions, including the NVIDIA AI Enterprise software platform and the NVIDIA AI Registry, which provide security, performance optimization, and centralized management for generative AI models.

With these solutions, organizations can deploy models quickly and efficiently, ensuring optimal performance and reliability for their applications.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com does not assume any responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile and it is highly recommended that you invest with caution after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
