Published on 2023-11-11
Google and Anthropic have announced an expansion of their collaboration agreement to support the development of artificial intelligence (AI) and to deploy at scale Google Cloud's TPU v5e platform, now generally available.
The announcement came in the context of the AI Safety Summit, held a few days earlier in the United Kingdom, where the Bletchley Declaration was also signed.
Anthropic is an AI safety and research company based in San Francisco (United States) with which Google has worked closely since the firm's founding in 2021. The two companies have presented new updates supporting the development of this technology under a partnership they recently expanded in order to continue working 'boldly and responsibly'.
Thomas Kurian, the CEO of Google Cloud, commented that this agreement 'will bring AI to more people safely and is another example of how the newest, most innovative, and fastest-growing AI companies are being built on Google Cloud'.
From Anthropic's side, the collaboration is expected to keep focusing on 'making steerable, reliable, and interpretable AI systems available to more companies around the world', in the words of its co-founder and CEO, Dario Amodei.
Firstly, it was announced that Anthropic will be one of the first companies to deploy at scale Google Cloud's TPU v5e chips, described in a statement as the company's 'most cost-effective and scalable' AI accelerator to date.
It was also mentioned that Anthropic now uses Google Cloud's security services, including Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, to ensure that organizations deploying Anthropic's models on Google Cloud are protected against cyber threats.
Finally, Google said it has committed, together with Anthropic, to advancing AI safety, announcing a joint collaboration with the non-profit organization MLCommons as part of a new AI Safety benchmarking working group.
NEW UPDATES FOR TPU V5E
Alongside the expanded agreement with Anthropic, Google announced that TPU v5e is now generally available, providing customers with a unified TPU platform for both training and inference workloads.
Regarding the platform, the company stated that it offers 2.3 times higher training performance per dollar than the previous generation, and that with the large-scale Multislice training technology, Anthropic can scale its large language models (LLMs) 'beyond the physical limits of a single TPU pod, up to tens of thousands of interconnected chips'.
Since the platform was launched last August, Google's customers have adopted TPU v5e 'for a wide range of workloads that span training and serving AI models', according to Google.
Furthermore, the company noted that its Singlehost Inference and Multislice Training technologies, introduced last September to enable large-scale AI model serving and training, are now available to all users.
According to the brand, these technologies bring 'cost-effectiveness, scalability, and versatility to Google Cloud customers, with the ability to use a unified TPU platform for training and inference workloads'.