Google updates Vertex AI to let enterprises train GenAI on their own data


[Image: Google Cloud logo. Credit: Cesc Maymo/Getty Images]

At its annual customer and partner conference, Google Cloud Next ’23, Google’s Google Cloud unit on Tuesday morning unveiled updates to Vertex AI, its suite of tools for deploying machine learning models. The updates include enhancements to its foundation model, PaLM 2; new models from third parties, such as Meta’s Llama 2; and extensions that provide access to enterprise data.

The Vertex AI announcements accompany several other announcements at this year’s show, including a collaboration program called Duet AI for Workspace; new developer capabilities, including Duet AI for Developers; and new security features, including Duet AI: Mandiant Threat Intelligence.

Also: Google Workspace’s AI facelift is finally here. Meet Duet AI for Workspace

The highlight of the updates for enterprises will be the extensions to Vertex AI that let companies integrate their own data. Among other things, the extensions can integrate the Google-hosted models into enterprise apps such as CRM or email.

As Google stated in prepared remarks, “Developers can access, build, and manage extensions that deliver real-time information, incorporate company data, and take action on the user’s behalf; this opens up endless new possibilities for genAI applications that can operate as an extension of your enterprise.”

The announcement comes as competitor OpenAI on Monday unveiled the enterprise version of its ChatGPT program.

Also: Google Cloud expands developer tools and data analytics capabilities with generative AI

Google also announced an Enterprise version of its Colab notebooks system for writing Python in a browser. The new version lets data scientists develop the workflows of serving machine learning models with enhanced security and compliance features.

On the model-serving side, Google said it has expanded PaLM 2, the second version of its Pathways Language Model released in May, by increasing its “context window” — the amount of user input the model can process in a single pass when calculating an output.
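To make the practical effect of a larger context window concrete, here is a minimal, purely illustrative sketch (the token counts below are hypothetical, not figures Google disclosed): a model can only attend to a fixed number of tokens per pass, so a document longer than the window must be split into chunks, and a bigger window means fewer chunks.

```python
# Illustration only: why a larger context window helps with long documents.
# A model processes at most `context_window` tokens per pass, so a longer
# document must be split into that many chunks.

def num_passes(document_tokens: int, context_window: int) -> int:
    """Number of passes needed to cover a document of the given length
    (ceiling division)."""
    return -(-document_tokens // context_window)

# A hypothetical 100,000-token research paper:
print(num_passes(100_000, 8_000))   # smaller window -> 13 passes
print(num_passes(100_000, 32_000))  # larger window  -> 4 passes
```

Fewer passes means less chunking logic and less risk of the model losing cross-chunk context, which is why the change matters for long-form documents such as research papers and books.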

Also: Train AI models with your own data to mitigate risks

That increase, it said, means that “enterprises can easily process longer form documents like research papers and books.” Google did not disclose the context window length when it released PaLM 2’s technical report in May.

The company also announced new “tuning” options for its image generation model, Imagen. Imagen can now be tuned using what’s called Style Tuning, which lets a company create images “aligned to their specific brand guidelines or other creative needs” with a small number of reference images. 


