The AMD Ryzen AI MAX+ 395 (codenamed "Strix Halo") is the most powerful x86 APU and delivers a significant performance boost over the competition. Powered by "Zen 5" CPU cores, a 50+ peak AI TOPS XDNA 2 NPU, and a truly massive integrated GPU driven by 40 AMD RDNA 3.5 CUs, the Ryzen AI MAX+ 395 is a transformative upgrade for the premium thin-and-light form factor. The Ryzen AI MAX+ 395 is available in configurations ranging from 32GB all the way up to 128GB of unified memory - of which up to 96GB can be converted to VRAM through AMD Variable Graphics Memory.
The Ryzen AI Max+ 395 excels in consumer AI workloads like the llama.cpp-powered application LM Studio. Shaping up to be the must-have app for client LLM workloads, LM Studio allows users to locally run the latest language models without any technical knowledge required. Deploying new AI text and vision models on Day 1 has never been simpler.
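For readers who want to script against a locally loaded model rather than use the chat window, the sketch below assumes LM Studio is running in its local-server mode, which exposes an OpenAI-compatible endpoint (by default at http://localhost:1234/v1), and that the openai Python package is installed; the model name is a placeholder for whichever GGUF model you have loaded.

```python
from openai import OpenAI

# Point the OpenAI client at LM Studio's local, OpenAI-compatible server.
# The API key is ignored by the local server but the client requires a value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="your-local-model",  # placeholder: the identifier of the model loaded in LM Studio
    messages=[{"role": "user", "content": "Summarize what an APU is in two sentences."}],
)
print(response.choices[0].message.content)
```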
The "Strix Halo" platform extends AMD performance leadership in LM Studio with the new AMD Ryzen AI MAX+ series of processors.
As a primer: the model size is dictated by the number of parameters and the precision used. Generally speaking, doubling the parameter count (on the same architecture) or doubling the precision will also double the size of the model. Most of our competitor's current-generation offerings in this space max out at 32GB on-package memory. This is enough shared graphics memory to run large language models (roughly) up to 16GB in size.
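A quick back-of-the-envelope sketch of that sizing rule - the bytes-per-parameter figures here are idealized round numbers, and real quantized GGUF files add some overhead for metadata and mixed-precision layers:

```python
# Approximate model size: parameter count x bytes per parameter.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}  # idealized values

def model_size_gb(params_billions: float, precision: str) -> float:
    """Rough in-memory size in GB (decimal): billions of params x bytes per param."""
    return params_billions * BYTES_PER_PARAM[precision]

for name, params in [("Llama 3.2 3b", 3), ("DeepSeek R1 Distill Llama 8b", 8), ("Phi 4 14b", 14)]:
    print(f"{name}: ~{model_size_gb(params, 'int4'):.1f} GB at 4-bit, "
          f"~{model_size_gb(params, 'fp16'):.1f} GB at FP16")
```

The printed numbers illustrate why roughly 16GB of usable graphics memory is the practical ceiling on a 32GB machine: a 14-billion-parameter model at a common 4-bit quantization already lands in the high single digits of gigabytes, before accounting for context and runtime overhead.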
Benchmarking text and vision language models in LM Studio
For this comparison, we will be using the ASUS ROG Flow Z13 with 64GB of unified memory. We will restrict the selection to models that fit within 16GB to ensure they also run on the competitor's 32GB laptop.
From the results, we can see that the ASUS ROG Flow Z13 - powered by the integrated Radeon 8060S and taking full advantage of the 256 GB/s bandwidth - effortlessly achieves up to 2.2x the performance of the Intel Arc 140V in token throughput.
The performance uplift is very consistent across different model types (whether you are running chain-of-thought DeepSeek R1 Distills or standard models like Microsoft Phi 4) and different parameter sizes.
In time-to-first-token benchmarks, the AMD Ryzen AI MAX+ 395 processor is up to 4x faster than the competitor on smaller models like Llama 3.2 3b Instruct.
Moving up to 7-billion and 8-billion parameter models like DeepSeek R1 Distill Qwen 7b and DeepSeek R1 Distill Llama 8b, the Ryzen AI Max+ 395 is up to 9.1x faster. When looking at 14-billion parameter models (approaching the largest size that can comfortably fit on a standard 32GB laptop), the ASUS ROG Flow Z13 is up to 12.2x faster than the Intel Core Ultra 258V powered laptop - more than an order of magnitude faster than the competition!
The larger the LLM, the greater the AMD Ryzen AI Max+ 395 processor's lead in responding to the user query. So whether you are having a conversation with the model or giving it large summarization tasks involving thousands of tokens, the AMD machine will be much faster to respond. This advantage scales with the prompt length - the heavier the task, the more pronounced the advantage will be.
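One way to see both metrics on your own machine is to stream a completion and time it. The sketch below reuses the hypothetical local endpoint from the earlier example and treats each streamed chunk as roughly one token, which is close enough for a rough comparison:

```python
import time
from openai import OpenAI

# Same assumed LM Studio local endpoint as in the earlier sketch.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def measure(prompt: str, model: str = "your-local-model") -> None:
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first visible token arrives
            chunks += 1
    end = time.perf_counter()
    if first_token_at is None:
        print("no tokens returned")
        return
    print(f"time to first token: {first_token_at - start:.2f}s, "
          f"throughput: ~{chunks / (end - first_token_at):.1f} tok/s")

measure("Summarize this article in five bullet points: ...")
```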
Text-only LLMs are also gradually being replaced by highly capable multi-modal models that have vision adapters and visual reasoning capabilities. IBM Granite Vision is one example and the recently launched Google Gemma 3 family of models is another - both bringing strong vision capabilities to next-generation AMD AI PCs. Both of these models run with excellent performance on an AMD Ryzen AI MAX+ 395 processor.
An interesting point to note here: when running vision models, the time-to-first-token metric also effectively becomes the time it takes for the model to analyze the image you give it.
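To make that concrete, here is a hedged sketch of how an image prompt can be sent to a locally hosted vision model through the same assumed OpenAI-compatible endpoint (the model name is again a placeholder); everything the model does with the image before emitting its first word counts toward the time-to-first-token figure:

```python
import base64
from openai import OpenAI

# Same assumed LM Studio local endpoint as in the earlier sketches.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local image as a base64 data URL so it can travel in the request body.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="your-local-vision-model",  # placeholder, e.g. a Granite Vision or Gemma 3 GGUF
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```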
The Ryzen AI Max+ 395 processor is up to 7x faster in IBM Granite Vision 3.2 3b, up to 4.6x faster in Google Gemma 3 4b, and up to 6x faster in Google Gemma 3 12b. The ASUS ROG Flow Z13 came with a 64GB memory option, so it can also effortlessly run the Google Gemma 3 27B Vision model - currently considered the state of the art (SOTA).