Google Gemma AI now available

It’s encouraging that the largest tech companies are driving the latest artificial intelligence projects. However, experts outside these firms also need a way to build AI if the technology is to spread worldwide. Fortunately, Google has taken a step toward putting that power in the public’s hands by releasing two models: Gemma 2B and 7B.

These are open models, meaning developers and researchers can customize and fine-tune them however they like. If you have the right IT and programming skills, Gemma 2B and 7B can serve as the foundation for your own AI project. More importantly, the search engine company also released the Responsible Generative AI Toolkit to promote ethical use.

What do we know about Google Gemma?

Gemma 2B and 7B are “open models,” meaning people can adjust them however they see fit. However, that doesn’t mean they’re open-source or free.

Google’s VP and General Manager for Developer X, Jeanine Banks, elaborated on open models in a TechCrunch report. “[Open models] has become pretty pervasive now in the industry,” Banks said.

“And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use — things like redistribution, as well as ownership of those variants that are developed — vary based on the model’s own specific terms of use.”

“And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.”

The official Gemma website does not provide further details on how these AI models work. However, Google says they are “state-of-the-art.”

“The generation quality has gone significantly up in the last year,” Google DeepMind product management director Tris Warkentin said. 

“Things that previously would have been the remit of extremely large models are now possible with state-of-the-art smaller models.” 


“This unlocks completely new ways of developing AI applications that we’re pretty excited about, including being able to run inference and do tuning on your local developer desktop or laptop with your RTX GPU or on a single host in GCP with Cloud TPUs, as well.”

Developers can start using Google Gemma through ready-to-use Colab and Kaggle notebooks. The models are also available through Hugging Face, Nvidia’s NeMo, and MaxText.
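For instance, getting started through Hugging Face could look something like the minimal sketch below. It assumes the weights are published on the Hugging Face Hub under IDs such as google/gemma-2b, and that you have accepted Google’s terms of use and authenticated your Hugging Face account.

```python
# Minimal sketch: loading a Gemma model with the Hugging Face transformers library.
# Assumes the weights are hosted under the hub ID "google/gemma-2b" (7B variant
# assumed to be "google/gemma-7b") and that you have accepted Google's terms of use
# and logged in, e.g. via `huggingface-cli login`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed hub ID for the 2B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the model runs locally.
inputs = tokenizer("Open models let developers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From the same starting point, developers can fine-tune the weights on their own data or move the model onto a local GPU or a Cloud TPU host, in line with the deployment options Warkentin describes above.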

The pre-trained and tuned Gemma 2B and 7B models can run on a wide range of hardware. Learn more about the latest AI tools at Inquirer Tech.
