Recent years have seen a massive increase in the capabilities of machine learning algorithms, which can now perform a wide range of tasks, from making predictions to matching patterns or generating images that match text prompts. To take on such diverse roles, these models have been given a broad spectrum of capabilities — but one thing they rarely are is efficient.
In the present era of exponential growth in the field, rapid advancements often come at the expense of efficiency. It is faster, after all, to produce a very large kitchen-sink model filled with redundancies than it is to produce a lean, mean inferencing machine.
But as these algorithms continue to mature, more attention is being directed at slimming them down. Even the most useful tools are of little value if they demand so many computational resources that they are impractical for real-world applications. As you might expect, the more complex an algorithm is, the more challenging it is to shrink. That is what makes Hugging Face’s recent announcement so exciting — they have taken an axe to vision language models (VLMs), releasing new additions to the SmolVLM family, including SmolVLM-256M, the smallest VLM in the world.
SmolVLM-256M is an impressive example of optimization done right, with just 256 million parameters. Despite its small size, this model performs very well in tasks such as captioning, document-based question answering, and basic visual reasoning, outperforming older, much larger models like the Idefics 80B from just 17 months ago. The SmolVLM-500M model provides an additional performance boost, with 500 million parameters offering a middle ground between size and capability for those needing some extra headroom.
Hugging Face achieved these advancements by refining its approach to vision encoders and data mixtures. The new models adopt the SigLIP base patch-16/512 encoder, which, though smaller than its predecessor, processes images at a higher resolution. This choice aligns with recent trends seen in Apple and Google research, which emphasize higher resolution for improved visual understanding without drastically increasing parameter counts.
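The trade-off described above can be made concrete with a bit of arithmetic. For a ViT-style encoder, the number of visual tokens typically scales as (resolution / patch size)². The encoder name (SigLIP base patch-16/512) comes from the article; the scaling formula and the 384-pixel comparison resolution below are standard ViT assumptions used here for illustration, not official SmolVLM specifications.

```python
# Back-of-the-envelope visual token budget for a ViT-style encoder.
# A patch-16 encoder tiles the image into non-overlapping 16x16 squares,
# so the token count grows quadratically with input resolution.

def num_patches(resolution: int, patch_size: int) -> int:
    """Number of non-overlapping square patches a ViT-style encoder produces."""
    assert resolution % patch_size == 0, "resolution must be divisible by patch size"
    return (resolution // patch_size) ** 2

# The new SigLIP base patch-16/512 encoder at its native resolution:
high_res = num_patches(512, 16)  # 32 x 32 = 1024 patches
# The same patch size at a (hypothetical) lower 384 px input, for comparison:
low_res = num_patches(384, 16)   # 24 x 24 = 576 patches

print(high_res, low_res)  # 1024 576
```

The point is that raising resolution from 384 to 512 pixels nearly doubles the visual detail the model sees per image, while the parameter count of the encoder itself stays fixed — which is why a smaller encoder at higher resolution can be a net win.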
The team also employed innovative tokenization methods to further streamline their models. By improving how sub-image separators are represented during tokenization, the models gained greater stability during training and achieved better quality outputs. For example, multi-token representations of image regions were replaced with single-token equivalents, enhancing both efficiency and accuracy.
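A toy sketch can show why this matters. If a sub-image separator is stored as a plain string, a generic tokenizer may shred it into many sub-word pieces; registering it as a single special token collapses it to one. The separator format `<row_1_col_2>` and the greedy toy tokenizer below are hypothetical illustrations, not SmolVLM's actual token strings or tokenizer.

```python
# Toy greedy longest-match tokenizer over a tiny vocabulary.
# Unknown spans fall back to single characters.

def naive_tokenize(text: str, vocab: set) -> list:
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:  # no vocabulary entry matched: emit one character
            tokens.append(text[i])
            i += 1
    return tokens

base_vocab = {"<", ">", "row", "col", "_", "1", "2"}
sep = "<row_1_col_2>"  # hypothetical sub-image separator string

multi = naive_tokenize(sep, base_vocab)           # shredded into pieces
single = naive_tokenize(sep, base_vocab | {sep})  # one learned special token
print(len(multi), len(single))  # 9 1
```

Cutting each separator from nine tokens to one shortens every training sequence that contains a tiled image, which is where the stability and quality gains the article mentions come from.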
In another advance, the data mixture strategy was fine-tuned to emphasize document understanding and image captioning, while maintaining a balanced focus on essential areas like visual reasoning and chart comprehension. These refinements are reflected in the models’ improved benchmarks, which show both the 256M and 500M models outperforming Idefics 80B in nearly every category.
By demonstrating that small can indeed be mighty, these models pave the way for a future where advanced machine learning capabilities are both accessible and sustainable. If you want to help bring that future into being, go grab these models now. Hugging Face has open-sourced them, and with only modest hardware requirements, just about anyone can get in on the action.