Over the last several years, machine learning (ML) has rapidly expanded the capabilities of artificial intelligence (AI) in the world of IT. Today, that expansion has entered the domain of generative AI.
Generative AI makes it easier to innovate and reduces the number of hours needed for development. This gives you more time to grow your business.
Enables generative AI applications but does not integrate directly into IDEs for coding assistance.
Amazon Bedrock is a fully managed service that makes FMs from Amazon and leading AI startups available through an API. This means you can choose from various FMs to find the model that's best suited for your use case.
Amazon Bedrock makes it easier for developers to create generative AI applications that can deliver up-to-date answers based on proprietary knowledge sources. They can also complete tasks for a wide range of use cases.
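As a minimal sketch of what calling a Bedrock-hosted FM through the API can look like: the model ID and payload shape below are illustrative assumptions (request formats vary by model provider), and the call itself requires AWS credentials and the boto3 SDK.

```python
import json

def build_messages_request(prompt, max_tokens=256):
    """Build a request body in the Anthropic Messages format used on
    Amazon Bedrock (the payload shape varies by model provider)."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def invoke_model(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send the request to the Bedrock runtime API. The model_id is an
    illustrative example; requires AWS credentials and boto3."""
    import boto3  # imported here so the pure helper above works without it
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(build_messages_request(prompt)),
    )
    return json.loads(response["body"].read())
```

Because Bedrock exposes every model behind the same InvokeModel API, switching models is largely a matter of changing the model ID and the request body.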
An AI-powered coding assistant, not an ML training infrastructure.
Amazon Q Developer is an AI coding companion that generates real-time, single-line or full-function code suggestions in your IDE to help you quickly build software.
With Amazon Q Developer, you can write a natural-language comment that outlines a specific task, such as "Upload a file with server-side encryption." Based on this information, Amazon Q Developer recommends one or more code snippets directly in the IDE that can accomplish the task.
You can quickly and easily accept the top suggestion (Tab key), view more suggestions (arrow keys), or continue writing your own code.
You should always review a code suggestion before accepting it, and you might need to edit it to ensure it does exactly what you intended.
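For illustration, a snippet like the one Amazon Q Developer might suggest for the comment "Upload a file with server-side encryption" could look like the following. The bucket and key names are placeholders, and the actual suggestion will vary:

```python
def build_upload_args(bucket, key, body):
    """Arguments for an encrypted S3 upload. SSE-S3 ("AES256") is one of
    several server-side encryption options S3 supports."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "AES256",
    }

def upload_with_sse(bucket, key, body):
    """Upload a file with server-side encryption (requires boto3 and
    AWS credentials)."""
    import boto3  # imported here so the helper above works without it
    s3 = boto3.client("s3")
    return s3.put_object(**build_upload_args(bucket, key, body))
```

Reviewing a suggestion like this before accepting it means checking details such as the encryption mode and bucket names against your actual requirements.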
A custom machine learning chip for inference tasks, not an IDE assistant.
AWS Inferentia is a custom machine learning chip designed by AWS that you can use for high-performance inference predictions.
To use the chip, set up an Amazon Elastic Compute Cloud (Amazon EC2) instance and use the AWS Neuron software development kit (SDK) to invoke the Inferentia chip.
To provide customers with the best Inferentia experience, Neuron has been built into the AWS Deep Learning AMIs (DLAMI). AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications.
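A hedged sketch of the Neuron workflow: compiling a PyTorch model for Inferentia typically means tracing it with the Neuron SDK on an instance that has the SDK installed (such as one launched from a DLAMI). The instance-size helper below is hypothetical, and the instance names are illustrative:

```python
def pick_inference_instance(accelerators=1):
    """Hypothetical helper: map a desired accelerator count to an
    Inf2 instance size (names here are illustrative)."""
    sizes = {1: "inf2.xlarge", 6: "inf2.24xlarge", 12: "inf2.48xlarge"}
    return sizes.get(accelerators, "inf2.xlarge")

def compile_for_inferentia(model, example_input):
    """Trace a PyTorch model into a Neuron-compiled graph. This runs
    only where the Neuron SDK (torch_neuronx) is installed, e.g., on
    an Inf2 instance launched from a Deep Learning AMI."""
    import torch
    import torch_neuronx
    neuron_model = torch_neuronx.trace(model, example_input)
    torch.jit.save(neuron_model, "model_neuron.pt")  # reload later for serving
    return neuron_model
```

The saved artifact can then be loaded at serving time, so compilation and inference can happen on different instances.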
AWS Trainium is an AWS-designed DL training accelerator that delivers high performance and cost-effective DL training on AWS.
Amazon EC2 Trn1 instances, powered by AWS Trainium, deliver the highest performance on DL training of popular natural language processing (NLP) models on AWS.
Trn1 instances offer up to 50% cost-to-train savings over comparable Amazon EC2 instances.
Trainium is optimized for training NLP, computer vision, and recommendation models used in a broad set of applications. These applications include text summarization, code generation, question answering, image and video generation, recommendation, and fraud detection.
Provides ML model deployment solutions, not code generation.
SageMaker JumpStart helps you quickly and easily get started with ML. SageMaker JumpStart provides a set of solutions for the most common use cases that can be deployed readily in just a few steps. The solutions are fully customizable and showcase the use of AWS CloudFormation templates and reference architectures so you can accelerate your ML journey. SageMaker JumpStart also provides foundation models and supports one-step deployment and fine-tuning of more than 150 popular open-source models, such as transformer, object detection, and image classification models.
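A sketch of one-step deployment with the SageMaker Python SDK: the model ID below is an illustrative example, deployment requires AWS credentials and provisions a billable endpoint, and the endpoint-name helper is hypothetical.

```python
def suggest_endpoint_name(model_id):
    """Hypothetical helper: derive a SageMaker-friendly endpoint name
    (letters, digits, and hyphens) from a JumpStart model ID."""
    return "jumpstart-" + model_id.replace("_", "-").replace(".", "-")

def deploy_jumpstart_model(model_id="huggingface-text2text-flan-t5-base"):
    """Deploy a JumpStart model to a real-time endpoint (requires the
    sagemaker SDK and an AWS account; the endpoint incurs cost)."""
    from sagemaker.jumpstart.model import JumpStartModel
    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy(endpoint_name=suggest_endpoint_name(model_id))
    return predictor
```

Fine-tuning follows a similar pattern, with JumpStart supplying the training script and default hyperparameters for the chosen model.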