AMD Ryzen AI Max+ 395 (codenamed "Strix Halo") is the most powerful x86 APU, delivering a significant performance boost

2025/03/19 13:12

The AMD Ryzen AI MAX+ 395 (codenamed "Strix Halo") is the most powerful x86 APU and delivers a significant performance boost over the competition. Powered by "Zen 5" CPU cores, a 50+ peak AI TOPS XDNA 2 NPU and a truly massive integrated GPU driven by 40 AMD RDNA 3.5 CUs, the Ryzen AI MAX+ 395 is a transformative upgrade for the premium thin-and-light form factor. The Ryzen AI MAX+ 395 is available in configurations ranging from 32GB all the way up to 128GB of unified memory, of which up to 96GB can be converted to VRAM through AMD Variable Graphics Memory.

The Ryzen AI Max+ 395 excels in consumer AI workloads like the llama.cpp-powered application: LM Studio. Shaping up to be the must-have app for client LLM workloads, LM Studio allows users to locally run the latest language model without any technical knowledge required. Deploying new AI text and vision models on Day 1 has never been simpler.
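
To make that concrete, here is a minimal sketch of talking to a locally loaded model from Python, assuming LM Studio's built-in local server is running on its default port and exposing its OpenAI-compatible API; the endpoint address and the model identifier are placeholders that depend on your installation.

# Minimal sketch: chat with a model loaded in LM Studio via its OpenAI-compatible local server.
# The base_url, api_key and model name below are placeholders for a typical local setup.
from openai import OpenAI
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # local server; key is unused
response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # placeholder: use whichever model you have loaded
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what unified memory means for running LLMs locally."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)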

The "Strix Halo" platform extends AMD performance leadership in LM Studio with the new AMD Ryzen AI MAX+ series of processors.

As a primer: the model size is dictated by the number of parameters and the precision used. Generally speaking, doubling the parameter count (on the same architecture) or doubling the precision will also double the size of the model. Most of our competitor's current-generation offerings in this space max out at 32GB on-package memory. This is enough shared graphics memory to run large language models (roughly) up to 16GB in size.
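
As a back-of-the-envelope sketch of that sizing rule (weights only; the KV cache and runtime overhead come on top):

# Rough model footprint: bytes ~ parameter count * bits per weight / 8 (weights only).
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # e.g. 14B at 8-bit ~ 14 GB, 14B at 4-bit ~ 7 GB
for params, bits in [(7, 16), (7, 8), (14, 8), (14, 4)]:
    print(f"{params}B parameters at {bits}-bit ~ {model_size_gb(params, bits):.1f} GB")
# Doubling either the parameter count or the precision doubles the footprint, which is why
# roughly 16 GB of shared graphics memory tops out around a 14B model at 8-bit before context overhead.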

Benchmarking text and vision language models in LM Studio

For this comparison, we will be using the ASUS ROG Flow Z13 with 64GB of unified memory. We will restrict the LLM size to models that fit inside 16GB to ensure that it runs on the competitor's 32GB laptop.

From the results, we can see that the ASUS ROG Flow Z13 - powered by the integrated Radeon 8060S and taking full advantage of the 256 GB/s bandwidth - effortlessly achieves up to 2.2x the performance of the Intel Arc 140V in token throughput.
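
A useful mental model for why memory bandwidth matters so much here: generating each new token requires streaming essentially the full set of weights from memory, so decode throughput is roughly bounded by bandwidth divided by model size. The sketch below is only that rule of thumb (it ignores KV-cache traffic, compute limits and other overheads), using the quoted 256 GB/s figure and a hypothetical ~8 GB quantized model.

# Rule-of-thumb ceiling on decode speed: how many times per second can the
# full weight set be read from memory?
def max_decode_tok_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb
print(f"~{max_decode_tok_s(256, 8):.0f} tok/s theoretical ceiling for an 8 GB model at 256 GB/s")
# Real throughput lands below this ceiling, but the scaling explains why higher
# unified-memory bandwidth translates directly into higher token throughput.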

The performance uplift is very consistent across different model types (whether you are running chain-of-thought DeepSeek R1 Distills or standard models like Microsoft Phi 4) and different parameter sizes.

In time to first token benchmarks, the AMD Ryzen AI MAX+ 395 processor is up to 4x faster than the competitor in smaller models like Llama 3.2 3b Instruct.
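
Both metrics are straightforward to measure yourself. The sketch below, again assuming LM Studio's OpenAI-compatible server on its default port and a placeholder model name, streams a completion and records the time to the first token as well as an approximate decode rate.

# Measure time-to-first-token and approximate decode throughput by streaming
# from a local OpenAI-compatible server (endpoint and model are placeholders).
import time
from openai import OpenAI
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
start = time.perf_counter()
first_token_at = None
n_chunks = 0
stream = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain unified memory in two paragraphs."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        n_chunks += 1  # roughly one token per streamed chunk
elapsed = time.perf_counter() - start
ttft = first_token_at - start
print(f"time to first token: {ttft:.2f} s")
print(f"throughput: {n_chunks / max(elapsed - ttft, 1e-9):.1f} tok/s (approx.)")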

Going up to 7 billion and 8 billion parameter models like the DeepSeek R1 Distill Qwen 7b and DeepSeek R1 Distill Llama 8b, the Ryzen AI Max+ 395 is up to 9.1x faster. When looking at 14 billion parameter models (which is approaching the largest size that can comfortably fit on a standard 32GB laptop), the ASUS ROG Flow Z13 is up to 12.2x faster than the Intel Core Ultra 258V powered laptop - more than an order of magnitude faster than the competition!

The larger the LLM, the faster the AMD Ryzen AI Max+ 395 processor is at responding to the user query relative to the competition. So whether you are having a conversation with the model or giving it large summarization tasks involving thousands of tokens, the AMD machine will be much faster to respond. This advantage scales with the prompt length, so the heavier the task, the more pronounced the advantage will be.

Text-only LLMs are also slowly being replaced with highly capable multi-modal models that have vision adapters and visual reasoning capabilities. IBM Granite Vision is one example and the recently launched Google Gemma 3 family of models is another, with both bringing highly capable vision features to next generation AMD AI PCs. Both of these models run extremely well on an AMD Ryzen AI MAX+ 395 processor.

An interesting point to note here: when running vision models, the time to first token metric also effectively becomes the time it takes for the model to analyze the image you give it.
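
To see that in practice, the sketch below sends a local image to a vision-capable model through the same OpenAI-compatible chat endpoint, assuming the local server accepts OpenAI-style image_url content parts; the file name, endpoint and model identifier are placeholders, and the wall-clock time until the first token arrives roughly corresponds to the image-analysis (prefill) stage.

# Query a vision model with a local image via an OpenAI-compatible chat API
# (file name, endpoint and model identifier are placeholders).
import base64
from openai import OpenAI
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
with open("chart.png", "rb") as f:  # hypothetical local image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")
response = client.chat.completions.create(
    model="gemma-3-12b-it",  # placeholder vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what this image shows."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)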

The Ryzen AI Max+ 395 processor is up to 7x faster in IBM Granite Vision 3.2 3b, up to 4.6x faster in Google Gemma 3 4b and up to 6x faster in Google Gemma 3 12b. The ASUS ROG Flow Z13 came with a 64GB memory option, so it can also effortlessly run the Google Gemma 3 27B Vision model, which is currently considered the SOTA (state of the art).
