Meta AI Chief Yann LeCun has said that he’s no longer interested in Large Language Models or LLMs, and is looking to next-generation AI architectures that’ll be able to better model the real world.
Speaking at the NVIDIA GTC 2025 event, LeCun said that these new AI architectures should enable AI to think more like humans, with persistent memory and the ability to think through complex problems.
“I am not interested anymore in LLMs,” LeCun said. “They are just token generators and those are limited because tokens are in discrete space. I am more interested in next-gen model architectures, that should be able to do 4 things: understand physical world, have persistent memory and ultimately be more capable to plan and reason.”
This isn’t the first time that LeCun has spoken about the limitations of LLMs. The models largely learn through text data, and he’s previously said that this is insufficient for achieving human-level AI.
“A typical large language model is trained with something on the order of 20 trillion tokens or words,” LeCun had said. “That’s about 10 to the power 14 bytes; one with 14 zeros behind it. It’s an enormous amount of information.”
“But then you compare this with the amount of information that gets to our brains through the visual system in the first four years of life, and it’s about the same amount. In four years, a young child has been awake a total of about 16,000 hours. The amount of information getting to the brain through the optic nerve is about 2 megabytes per second. Do the calculation and that’s about 10 to the power 14 bytes. It’s about the same. In four years a young child has seen as much information or data as the biggest LLMs.”
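LeCun's comparison can be checked with a quick back-of-the-envelope calculation. The figures below come from his remarks (20 trillion tokens, 16,000 waking hours, roughly 2 megabytes per second through the optic nerve); the average encoded size of a token is not stated in the talk and is assumed here to be about 5 bytes, which is what makes 20 trillion tokens come out to roughly 10^14 bytes.

```python
# Back-of-the-envelope check of LeCun's data-volume comparison.
# Figures from the talk; bytes-per-token is an assumption (~5 bytes).
TOKENS = 20e12                # ~20 trillion training tokens
BYTES_PER_TOKEN = 5           # assumed average encoded size of a token
llm_bytes = TOKENS * BYTES_PER_TOKEN              # ~1.0e14 bytes

HOURS_AWAKE = 16_000          # waking hours in a child's first four years
OPTIC_NERVE_BPS = 2e6         # ~2 MB/s reaching the brain via the optic nerve
child_bytes = HOURS_AWAKE * 3600 * OPTIC_NERVE_BPS  # ~1.15e14 bytes

print(f"LLM training data: {llm_bytes:.2e} bytes")
print(f"Child visual input: {child_bytes:.2e} bytes")
```

Both quantities land on the order of 10^14 bytes, which is the "about the same" that LeCun's argument rests on.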
This, he adds, means that we’re never going to get to human-level AI by just training on text. We’re going to have to get systems to understand the real world.
LeCun has now said that he’s not interested in LLMs any more. Indeed, Meta itself hasn’t released a new version of Llama in a while, and DeepSeek and other companies have taken over the mantle of releasing the top open-source models.
LeCun’s approach seems to be in contrast to OpenAI’s, which has consistently maintained that there’s no wall in scaling LLM capabilities, and has instead been layering additions onto LLM training, like test-time compute, to increase their capabilities. It remains to be seen which approach works out, but Meta appears bearish on LLMs, and seems to believe it’ll take a new technological breakthrough to dramatically improve the performance of current AI systems.