Amazon and Anthropic have formed a strategic collaboration that includes a $4BN investment in Anthropic by Amazon in exchange for AWS becoming Anthropic’s primary cloud provider for mission-critical workloads.
That’s just the start. Anthropic will use AWS Trainium and Inferentia chips to create and deploy its foundation models, gaining the platform benefits of AWS in the process.
Anthropic will also offer AWS customers access to these foundation models via Amazon Bedrock, including early access to new features.
Andy Jassy, Amazon CEO, commented on the strategic alliance:
“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration.
“Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”
Anthropic’s Claude
Anthropic – an AWS customer since 2021 – has quickly become a leading name in the LLM space with the launch of ‘Claude’.
Claude is a ChatGPT competitor that performs an array of tasks, including creative content generation, sophisticated dialogue, complex reasoning, and detailed instruction following.
Many customer experience vendors have latched onto Claude, including Zoom – which integrated the virtual assistant into its CCaaS platform in May.
Where Claude differs from the competition is in its use of Constitutional AI – which reduces the role of human feedback in the training process.
Instead, it learns from various knowledge sources and follows a written set of principles, its ‘constitution’, to judge whether outputs are unethical or harmful and revise them accordingly.
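In broad strokes, that works as a self-critique loop. The sketch below is purely illustrative; the principles listed and the generate() helper are hypothetical stand-ins, not Anthropic’s actual constitution or training pipeline.

```python
# Illustrative sketch of a Constitutional-AI-style self-critique loop.
# The principles and the generate() helper are hypothetical placeholders.

PRINCIPLES = [
    "Choose the response that is least likely to be harmful or unethical.",
    "Choose the response that is most helpful, honest, and harmless.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError

def constitutional_revision(prompt: str) -> str:
    draft = generate(prompt)
    for principle in PRINCIPLES:
        # The model critiques its own draft against each principle...
        critique = generate(
            f"Critique the response below against this principle: {principle}\n\n"
            f"Prompt: {prompt}\nResponse: {draft}"
        )
        # ...then rewrites the draft to address the critique,
        # with no human labeller in the loop.
        draft = generate(
            f"Rewrite the response to address the critique.\n\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft
```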
Anthropic believes this approach will enable it to innovate more quickly than its competitors.
As a result, the AI leader believes Claude is simpler to interact with, less likely to produce harmful outputs, and easier to steer than other foundation models.
In fact, Claude 2 scores above the 90th percentile on the GRE reading and writing exams, and close to the median applicant on quantitative reasoning.
The Three Layers of AWS
The strategic collaboration with Anthropic supports AWS’s plans to expand “all three layers” of its generative AI stack.
The bottom layer comprises NVIDIA compute instances and AWS’s custom silicon chips, which include AWS Trainium for AI training and AWS Inferentia for AI inference.
The middle layer focuses on offering customers a selection of foundation models via its managed service, Amazon Bedrock, which was announced in April 2023.
The top layer of the stack comprises AWS’s generative AI applications and services, such as Amazon CodeWhisperer.
Anthropic fits into AWS’s middle layer, giving Amazon Bedrock customers early access to its models and the ability to fine-tune them on their own proprietary data through a self-service feature.
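For developers, calling Claude through Bedrock is a standard AWS API request. The snippet below is a rough sketch using boto3’s bedrock-runtime client; the model ID and request shape follow the Claude v2 text-completions interface available at the time of writing, and the example prompt is hypothetical.

```python
import json
import boto3

# Bedrock exposes Anthropic's models through the standard AWS SDK.
# The model ID and request body below match the Claude v2
# text-completions format; newer Claude versions may differ.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarise our Q3 support tickets in three bullet points.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream containing a JSON document with the completion.
completion = json.loads(response["body"].read())["completion"]
print(completion)
```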
AWS and Anthropic will help customers of all sizes develop generative AI applications for their businesses through teams of AI experts in the AWS Generative AI Innovation Center.
Dario Amodei, Co-Founder and CEO of Anthropic, expressed his enthusiasm for the expanded partnership: “We are excited to use AWS’s Trainium chips to develop future foundation models.
“Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers.
“By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”
For more on the business applications of Bedrock, read our article: AWS Introduces ‘Agents for Amazon Bedrock’