Gemma 3 models have been downloaded over 100 million times

Mar 12, 2025 at 05:21 pm

Google on March 12 unveiled the next generation of its open models, Gemma 3, in a move to expand its footprint in the rapidly evolving artificial intelligence (AI) sector.

Gemma 3, a family of lightweight open models, has been built from the same research and technology that power Google's flagship Gemini 2.0 AI models, the tech giant said.

These models are designed to run fast, directly on devices — from phones and laptops to workstations — helping developers create AI applications.

The company claimed that Gemma 3 is the most capable model one can run on a single graphics processing unit (GPU) or tensor processing unit (TPU), outperforming Meta's Llama-405B, DeepSeek-V3 and OpenAI's o3-mini in preliminary human preference evaluations on LMArena's leaderboard.

Google introduced the Gemma family of open models in February 2024 as part of its strategy to attract developers and researchers to its AI offerings and to compete with Meta's Llama, which is also a family of open AI models.

The company said these models have been downloaded over 100 million times, and the developer community has created more than 60,000 Gemma variants to date.

Gemma 3 will be available in a range of sizes (1B, 4B, 12B and 27B parameters) with a 128k-token context window, out-of-the-box support for over 35 languages and pretrained support for over 140 languages. The models can also analyse images, text, and short videos.

Gemma 3 integrates with developer tools such as Hugging Face Transformers, Ollama, JAX, Keras, PyTorch and others. Developers can access Gemma 3 through Google's free web-based developer tool AI Studio, or download the model from Hugging Face or Kaggle. One can request access to the Gemma 3 API through AI Studio.
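
For the Hugging Face Transformers route, the snippet below is a minimal sketch of what loading and prompting a Gemma 3 checkpoint might look like. The model ID "google/gemma-3-4b-it" and the generation settings are assumptions for illustration only; exact checkpoint names should be confirmed on the Hugging Face or Kaggle model pages, and gated checkpoints may require accepting the licence and authenticating first.

    # Minimal sketch (assumed model ID): generate text from a Gemma 3 checkpoint
    # using the Hugging Face Transformers pipeline API.
    from transformers import pipeline

    generator = pipeline("text-generation", model="google/gemma-3-4b-it")
    prompt = "Explain in one sentence what an open model is."
    result = generator(prompt, max_new_tokens=64)
    print(result[0]["generated_text"])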

The launch comes against the backdrop of Chinese AI lab DeepSeek's claim to have built AI models that rival top-tier models from Google and other US companies such as OpenAI and Meta at a fraction of the cost. DeepSeek's release earlier this year raised fresh concerns among investors over the billions of dollars tech companies are pouring into developing their AI models and products.

In February, however, Sundar Pichai, CEO of Google parent Alphabet, argued that the search giant's Gemini 2.0 Flash and 2.0 Flash Thinking models are “some of the most efficient models” out there, including compared with DeepSeek's V3 and R1.

"I think part of the reason we are so excited about the AI opportunity is we know we can drive extraordinary use cases because the cost of actually using it is going to keep coming down, which will make more use cases feasible. And that's the opportunity space. It's as big as it comes. And that's why you're seeing us invest to meet that moment" Pichai said during the company's earnings conference call.

Alphabet plans to invest around $75 billion in capital expenditure in 2025 to bolster its AI efforts, with the money going towards building out technical infrastructure: primarily servers, followed by data centers and networking.
