Market Cap: $2.6911T (+0.560%)
Volume (24h): $89.4376B (-31.280%)
Top cryptocurrencies (price, 24h change):
  • bitcoin: $82951.790245 USD (-0.70%)
  • ethereum: $1791.465527 USD (-1.83%)
  • tether: $0.999717 USD (-0.01%)
  • xrp: $2.055970 USD (+0.14%)
  • bnb: $593.238692 USD (-1.32%)
  • usd-coin: $1.000032 USD (+0.02%)
  • solana: $115.381354 USD (-4.13%)
  • dogecoin: $0.161732 USD (-2.67%)
  • cardano: $0.649656 USD (-0.44%)
  • tron: $0.239261 USD (+1.04%)
  • unus-sed-leo: $9.561241 USD (+1.74%)
  • toncoin: $3.530703 USD (-6.73%)
  • chainlink: $12.739766 USD (-3.87%)
  • stellar: $0.259841 USD (-2.48%)
  • avalanche: $18.093210 USD (-3.52%)
Cryptocurrency News Articles

Supermicro Systems with the NVIDIA B200 Outperformed the Previous Generation of Systems by Delivering 3X the Token Generation Per Second

Apr 03, 2025 at 09:11 pm

Latest Benchmarks Show Supermicro Systems with the NVIDIA B200 Outperformed the Previous Generation of Systems with 3X the Token Generation Per Second

SAN JOSE, Calif., April 3, 2025 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing first-to-market, industry-leading performance on several MLPerf Inference v5.0 benchmarks, using the NVIDIA HGX™ B200 8-GPU. The 4U liquid-cooled and 10U air-cooled systems achieved the best performance in select benchmarks. Supermicro demonstrated more than 3 times the token generation per second (tokens/s) for the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems.

"Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks from MLCommons in 2025," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first-to-market with a diverse range of systems optimized for various workloads. We continue to collaborate closely with NVIDIA to fine-tune our systems and secure a leadership position in AI workloads."

Learn more about the new MLPerf v5.0 Inference benchmarks at: https://mlcommons.org/benchmarks/inference-datacenter/

Supermicro is the only system vendor publishing record MLPerf inference performance (on select benchmarks) for both the air-cooled and liquid-cooled NVIDIA HGX™ B200 8-GPU systems. Both air-cooled and liquid-cooled systems were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software to showcase the impressive performance. Within the operating margin, the Supermicro air-cooled B200 system exhibited the same level of performance as the liquid-cooled B200 system. Supermicro was already delivering these systems to customers while the benchmarks were being conducted.

MLCommons requires that all results be reproducible, that the products be available, and that the results be auditable by other MLCommons members. Supermicro engineers optimized the systems and software within the bounds allowed by the MLCommons rules.

The SYS-421GE-NBRT-LCC (8x NVIDIA B200-SXM-180GB) and SYS-A21GE-NBRT (8x NVIDIA B200-SXM-180GB) showed performance leadership running the Mixtral 8x7B Inference (Mixture of Experts) benchmarks at 129,000 tokens/second. The Supermicro air-cooled and liquid-cooled NVIDIA B200-based systems delivered over 1,000 tokens/second of inference for the large Llama3.1-405B model, whereas previous generations of GPU systems delivered far lower throughput. For smaller inference tasks, on the Llama2-70B benchmark, a Supermicro system with the NVIDIA B200-SXM-180GB installed showed the highest performance from a Tier 1 system supplier.

Specifically:

* Supermicro achieved outstanding performance on several MLPerf Inference v5.0 benchmarks with the NVIDIA HGX™ B200 8-GPU in 10U and 4U configurations.

* In select benchmarks, Supermicro's systems delivered the best performance.

* Supermicro showcased more than 3x the tokens/second generation for the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems.
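The headline claims above are throughput ratios: tokens generated per second on the new system divided by the same measurement on the baseline. As a minimal sketch of that arithmetic (the B200 figure of over 1,000 tokens/s on Llama3.1-405B is from the article; the H200 baseline value below is a hypothetical placeholder, not a published result):

```python
# Illustrative sketch of how a generational speedup claim is derived from
# MLPerf-style tokens/second measurements.

def speedup(new_tokens_per_s: float, baseline_tokens_per_s: float) -> float:
    """Ratio of new-system throughput to baseline-system throughput."""
    return new_tokens_per_s / baseline_tokens_per_s

b200_llama405b = 1000.0  # tokens/s, from the article (8x B200 system)
h200_llama405b = 320.0   # tokens/s, HYPOTHETICAL baseline for illustration only

print(f"Generational speedup: {speedup(b200_llama405b, h200_llama405b):.2f}x")
```

With these placeholder numbers the ratio lands just above 3x, matching the shape of the claim; the actual published per-system results are available on the MLCommons results pages linked above.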

"MLCommons congratulates Supermicro on their submission to the MLPerf Inference v5.0 benchmark. We are pleased to see their results showcasing significant performance gains compared to earlier generations of systems," said David Kanter, Head of MLPerf at MLCommons. "Customers will be pleased by the performance improvements achieved which are validated by the neutral, representative and reproducible MLPerf results."

Supermicro offers a comprehensive AI portfolio with over 100 GPU-optimized systems, both air-cooled and liquid-cooled options, with a choice of CPUs, ranging from single-socket optimized systems to 8-way multiprocessor systems. Supermicro rack-scale systems include computing, storage, and network components, which reduce the time required to install them once they are delivered to a customer.

Supermicro's NVIDIA HGX B200 8-GPU systems utilize next-generation liquid-cooling and air-cooling technology. The newly developed cold plates and the new 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation in the same 4U form factor. The rack solutions are available in 42U, 48U, or 52U configurations.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com does not assume any responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile and it is highly recommended that you invest with caution after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
