Google on Monday, March 12, unveiled the next generation of its open model, Gemma 3, in a move to further its footprint in the rapidly evolving artificial intelligence (AI) sector.
Gemma 3, a family of lightweight open models, has been built from the same research and technology that powers its flagship Gemini 2.0 AI models, the tech giant said.
These models are designed to run fast, directly on devices — from phones and laptops to workstations — helping developers create AI applications.
The company claimed that Gemma 3 is the most capable model one can run on a single graphics processing unit (GPU) or tensor processing unit (TPU), outperforming Meta's Llama-405B, DeepSeek-V3 and OpenAI's o3-mini in preliminary human preference evaluations on LMArena's leaderboard.
Google introduced the Gemma family of open models in February as part of its strategy to attract developers and researchers to its AI offerings and compete with Meta's Llama, which also provides open AI models.
The company said these models have been downloaded over 100 million times, and the developer community has created more than 60,000 Gemma variants to date.
Gemma 3 will be available in a range of sizes — 1B, 4B, 12B and 27B parameters — and offer out-of-the-box support for over 35 languages and pretrained support for over 140 languages with a 128k-token context window. It also has the ability to analyse images, text, and short videos.
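The single-GPU claim can be sanity-checked with back-of-the-envelope memory arithmetic. The sketch below estimates how much memory each Gemma 3 size variant needs just to hold its weights; the bytes-per-parameter figures are common rules of thumb for different precisions (not official Google numbers), and they ignore the KV cache and activations, which grow with the 128k-token context window.

```python
# Rough weights-only memory estimate for the Gemma 3 size variants
# mentioned in the article (1B, 4B, 12B and 27B parameters).
# Bytes-per-parameter values are rules of thumb, not official figures.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def approx_vram_gib(params_billions: float, precision: str = "fp16") -> float:
    """Approximate GiB needed to hold the weights alone at a given precision."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 2**30

for size in (1, 4, 12, 27):
    for precision in ("fp16", "int8", "int4"):
        print(f"{size:>2}B @ {precision}: ~{approx_vram_gib(size, precision):.1f} GiB")
```

By this estimate the 27B variant needs roughly 50 GiB in fp16 but only about 13 GiB at 4-bit quantization, which is consistent with the idea that even the largest variant can fit on a single high-end GPU.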
Read: Google woos India’s booming AI developer community with new tools, access to latest models
Gemma 3 integrates with developer tools such as Hugging Face Transformers, Ollama, JAX, Keras, PyTorch and others. Developers can access Gemma 3 through Google's free web-based developer tool AI Studio, or download the model from Hugging Face or Kaggle. One can request access to the Gemma 3 API through AI Studio.
The launch comes against the backdrop of claims by Chinese AI lab DeepSeek that it has built AI models that can rival top-tier models from Google and other US companies such as OpenAI and Meta at a fraction of the cost. DeepSeek's launch earlier this year raised fresh concerns among investors over the billions of dollars tech companies are pouring into developing their AI models and products.
In February, however, Sundar Pichai, CEO of Google parent Alphabet, argued that the search giant's Gemini Flash 2.0 and Flash Thinking 2.0 models are “some of the most efficient models” available, including compared with DeepSeek's V3 and R1.
"I think part of the reason we are so excited about the AI opportunity is we know we can drive extraordinary use cases because the cost of actually using it is going to keep coming down, which will make more use cases feasible. And that's the opportunity space. It's as big as it comes. And that's why you're seeing us invest to meet that moment," Pichai said during the company's earnings conference call.
Alphabet plans to invest around $75 billion in capital expenditures in 2025 to bolster its AI efforts. The investment will be made towards building out technical infrastructure, primarily for servers, followed by data centers and networking.