Market Cap: $2.6844T 0.700%
Volume (24h): $104.1076B 9.910%
Fear & Greed Index:
bitcoin       $82951.790245 USD   -0.70%
ethereum      $1791.465527 USD    -1.83%
tether        $0.999717 USD       -0.01%
xrp           $2.055970 USD        0.14%
bnb           $593.238692 USD     -1.32%
usd-coin      $1.000032 USD        0.02%
solana        $115.381354 USD     -4.13%
dogecoin      $0.161732 USD       -2.67%
cardano       $0.649656 USD       -0.44%
tron          $0.239261 USD        1.04%
unus-sed-leo  $9.561241 USD        1.74%
toncoin       $3.530703 USD       -6.73%
chainlink     $12.739766 USD      -3.87%
stellar       $0.259841 USD       -2.48%
avalanche     $18.093210 USD      -3.52%
Cryptocurrency News Articles

Meta AI Chief Yann LeCun Has Lost Interest in LLMs, Is Looking to Next-Generation Architectures

2025/03/20 00:06


Meta AI Chief Yann LeCun has said that he’s no longer interested in Large Language Models or LLMs, and is looking to next-generation AI architectures that’ll be able to better model the real world.

Speaking at the NVIDIA GTC 2025 event, LeCun said that these new AI architectures should enable AI to think more like humans, with persistent memory and the ability to think through complex problems.

“I am not interested anymore in LLMs,” LeCun said. “They are just token generators and those are limited because tokens are in discrete space. I am more interested in next-gen model architectures, that should be able to do 4 things: understand physical world, have persistent memory and ultimately be more capable to plan and reason.”

This isn’t the first time that LeCun has spoken about the limitations of LLMs. The models largely learn through text data, and he’s previously said that this is insufficient for achieving human-level AI.

“A typical large language model is trained with something on the order of 20 trillion tokens or words,” LeCun had said. “That’s about 10 to the power 14 bytes; one with 14 zeros behind it. It’s an enormous amount of information.”

“But then you compare this with the amount of information that gets to our brains through the visual system in the first four years of life, and it’s about the same amount. In four years, a young child has been awake a total of about 16,000 hours. The amount of information getting to the brain through the optic nerve is about 2 megabytes per second. Do the calculation and that’s about 10 to the power 14 bytes. It’s about the same. In four years a young child has seen as much information or data as the biggest LLMs.”

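LeCun's back-of-the-envelope comparison can be checked directly. The sketch below uses the figures from his quotes (20 trillion tokens, 16,000 waking hours, roughly 2 megabytes per second through the optic nerve); the ~5 bytes-per-token factor is an assumption added here to get from tokens to bytes, since the talk only gives the final order of magnitude.

```python
# LeCun's comparison: LLM training data vs. visual input to a young child.
tokens = 20e12            # ~20 trillion training tokens (from the quote)
bytes_per_token = 5       # assumed rough average, not stated in the talk
llm_bytes = tokens * bytes_per_token

waking_hours = 16_000     # total waking hours in the first four years
optic_nerve_bps = 2e6     # ~2 megabytes per second (from the quote)
child_bytes = waking_hours * 3600 * optic_nerve_bps

print(f"LLM training data:  {llm_bytes:.1e} bytes")   # 1.0e+14
print(f"Child visual input: {child_bytes:.1e} bytes")  # 1.2e+14
```

Both come out around 10^14 bytes, matching LeCun's "about the same amount" claim.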
This, he adds, means that we’re never going to get to human-level AI by just training on text. We’re going to have to get systems to understand the real world.

LeCun has now said that he’s not interested in LLMs any more. Indeed, Meta itself hasn’t released a new version of Llama in a while, and DeepSeek and other companies have taken over the mantle of releasing the top open-source models.

LeCun’s approach seems to be in contrast to OpenAI’s, which has consistently maintained there’s no wall in scaling LLM capabilities, and has instead been coming up with add-ons to LLM training, like test-time compute, to increase their capabilities. It remains to be seen which approach works out, but it appears that Meta is bearish on LLMs, and seems to believe that it’ll take a new technological breakthrough to dramatically improve the performance of current AI systems.

Disclaimer: info@kdj.com

The information provided is not trading advice. kDJ.com assumes no liability for any investments made based on the information in this article. Cryptocurrencies are highly volatile; please do your own research and invest with caution!

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.

Other articles published on April 5, 2025