AWS re:Invent 2024 – Day 2 Recap

AWS Trainium2 instances now generally available

AWS announced the general availability of AWS Trainium2-powered Amazon EC2 instances, designed for high-performance deep learning and generative AI workloads. These instances deliver 30-40% better price performance than GPU-based EC2 instances. With 16 Trainium2 chips and ultra-fast NeuronLink, they provide 20.8 petaflops of compute, ideal for training large models. 

For larger models, Trn2 UltraServers connect four Trn2 instances, enabling scale across 64 Trainium2 chips. These servers accelerate training, improve inference performance, and power trillion-parameter models. Project Rainier, a collaboration with Anthropic, will create the world’s largest AI compute cluster using these UltraServers. 

Trn2 instances are generally available today in the US East (Ohio) AWS Region, with availability in additional regions coming soon. Trn2 UltraServers are available in preview. 
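The quoted compute figures can be sanity-checked with quick arithmetic; note the per-chip rate below is derived from the published totals, not an official per-chip specification.

```python
# Back-of-envelope check of the Trn2 compute figures quoted above.
chips_per_instance = 16
instance_petaflops = 20.8
petaflops_per_chip = instance_petaflops / chips_per_instance  # ~1.3 PF/chip (derived)

# An UltraServer links four Trn2 instances.
instances_per_ultraserver = 4
chips_per_ultraserver = chips_per_instance * instances_per_ultraserver  # 64 chips
ultraserver_petaflops = petaflops_per_chip * chips_per_ultraserver      # ~83.2 PF
```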

Trainium3 chips—designed for the high-performance needs of the next frontier of generative AI workloads

AWS announced Trainium3, its next-generation AI chip, which will allow customers to build bigger models faster and deliver superior real-time performance when deploying them. It will be the first AWS chip built on a 3-nanometer process node, setting a new standard for performance, power efficiency, and density. Trainium3-powered UltraServers are expected to be four times more performant than Trn2 UltraServers, letting customers iterate even faster when building models. The first Trainium3-based instances are expected to be available in late 2025. 

New database capabilities announced including Amazon Aurora DSQL—the fastest distributed SQL database 

AWS announced enhancements to Amazon Aurora and Amazon DynamoDB, offering strong consistency, low latency, and global scalability. 

  • Amazon Aurora DSQL: This serverless, distributed SQL database offers 99.999% multi-Region availability and delivers reads and writes up to four times faster than other distributed SQL databases. It eliminates the trade-off between low latency and full SQL support, using microsecond-accurate time synchronization to keep Regions consistent. 
  • DynamoDB Enhancements: DynamoDB global tables now offer strong consistency, ensuring real-time access to the latest data without code changes. 

Both Aurora DSQL and DynamoDB enhancements are in preview. 
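As a minimal sketch of what the DynamoDB strong-consistency enhancement looks like from the SDK side, the snippet below only builds the request parameters for a strongly consistent `GetItem` call; the table and key names are hypothetical, and actually executing the read requires the AWS SDK and credentials, e.g. `boto3.client("dynamodb").get_item(**params)`.

```python
# Parameters for a strongly consistent DynamoDB read (table and key are
# hypothetical examples). With multi-Region strong consistency on global
# tables, ConsistentRead=True returns the latest committed write even
# when the read is served from a replica Region.
params = {
    "TableName": "Orders",
    "Key": {"OrderId": {"S": "1234"}},
    "ConsistentRead": True,  # request strong consistency
}
```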

Introducing Amazon Nova 

Amazon Nova is a new generation of foundation models (FMs) capable of processing text, images, and video. The family includes Amazon Nova Micro, Amazon Nova Lite, Amazon Nova Pro, Amazon Nova Premier, Amazon Nova Canvas, and Amazon Nova Reel, enabling applications for multimedia understanding and generation. All Nova models are available through Amazon Bedrock and are designed for speed, cost-efficiency, and ease of integration with customers’ systems. 
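As a sketch, calling a Nova model through the Bedrock Converse API might look like the following; the model ID is illustrative (check the Bedrock model catalog for exact identifiers), and only the request payload is constructed here — sending it requires the AWS SDK and credentials via `boto3.client("bedrock-runtime").converse(**request)`.

```python
# Request payload for the Amazon Bedrock Converse API. The model ID is
# an assumption for illustration, not a confirmed identifier.
request = {
    "modelId": "amazon.nova-lite-v1:0",
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this product description."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
}
```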

Amazon Q Developer reimagines how developers build and operate software with generative AI 

Amazon Q Developer enhancements leverage generative AI to improve software development and operations: 

  • Automated Unit Tests: Amazon Q Developer automates the creation of unit tests, reducing the burden on developers and ensuring complete test coverage with less effort. This helps developers deliver reliable code faster and avoid costly rollbacks. 
  • Documentation Updates: Automates the creation and updating of project documentation, ensuring accuracy and reducing the time developers spend on understanding code. This enables quicker onboarding and more meaningful contributions from team members. 
  • Code Reviews: Amazon Q Developer automates code reviews, providing quick feedback to help developers maintain quality, style, and security standards. This speeds up the review process, saving time and allowing developers to resolve issues earlier. 
  • Operational Issue Resolution: Operational teams can quickly identify and resolve issues across AWS environments with Amazon Q Developer by analyzing vast data points to uncover service relationships and anomalies. It provides actionable hypotheses and guides users through fixes, streamlining issue resolution and reducing downtime. 

These capabilities are now available in IDEs, in the AWS Management Console, and through GitLab integration. 

Next generation of Amazon SageMaker to deliver unified platform for data, analytics, and AI 

AWS CEO Matt Garman unveiled the next generation of Amazon SageMaker. The revamped Amazon SageMaker integrates analytics, machine learning, and generative AI into a unified platform: 

  • SageMaker Unified Studio: The new unified studio provides a single environment for accessing and acting on data, integrating AWS analytics, ML, and AI tools. Amazon Q Developer helps customers tackle various data use cases with the best tools for the job. 
  • SageMaker Catalog: Amazon SageMaker Catalog provides secure access to data, models, and artifacts, ensuring compliance and enterprise security. Built on Amazon DataZone, it offers governance tools like data classification and toxicity detection to safeguard AI applications. 
  • SageMaker Lakehouse: Amazon SageMaker Lakehouse unifies data across S3, data lakes, Redshift, and federated sources, simplifying analytics and ML tool usage. It supports Apache Iceberg for seamless data processing and offers fine-grained access controls for secure data sharing.  
  • Zero-ETL Integrations: AWS’s zero-ETL integrations with SaaS applications like Zendesk and SAP simplify data access for analytics and AI in SageMaker Lakehouse and Redshift. This eliminates the need for complex data pipelines, speeding up insights and reducing costs. 

The new SageMaker platform enhances collaboration, security, and efficiency for data and AI projects.

AWS strengthens Amazon Bedrock with industry-first AI safeguard, new agent capability, and model customization 

AWS CEO Matt Garman unveiled new Amazon Bedrock capabilities to address key challenges in deploying generative AI. These features tackle hallucination-induced errors, enable orchestration of AI agents for complex tasks, and support smaller, cost-efficient models that rival large models in performance: 

  • Prevent factual errors due to hallucinations: Generative AI models can produce “hallucinations,” plausible-sounding but factually incorrect responses, which limits trust in critical industries. Amazon Bedrock’s Automated Reasoning checks prevent such errors using verifiable logical reasoning, delivering accurate, auditable, and policy-aligned responses via Bedrock Guardrails. 
  • Easily build and coordinate multiple agents to execute complex workflows: Amazon Bedrock Agents enable applications to execute tasks by leveraging AI-powered agents. AWS now supports multi-agent collaboration, allowing customers to coordinate specialized agents for complex workflows, such as financial analysis, across systems and data sources with ease. 
  • Create smaller, faster, more cost-effective models: Amazon Bedrock Model Distillation lets customers create smaller, more efficient models by transferring knowledge from larger models, balancing performance, cost, and latency for real-time applications. It works with models from Anthropic and Meta, as well as Amazon Nova models. 

Automated Reasoning checks, multi-agent collaboration, and Model Distillation are all available in preview.
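As a hedged sketch of the Guardrails side of this, the snippet below builds a payload for Bedrock's `ApplyGuardrail` API, which validates text against a configured guardrail; the guardrail identifier and version are placeholders, and executing the check requires the AWS SDK and credentials via `boto3.client("bedrock-runtime").apply_guardrail(**request)`.

```python
# Payload for validating a model response against a Bedrock guardrail
# (identifier and version are hypothetical placeholders). With Automated
# Reasoning checks attached to the guardrail, the response is validated
# against encoded policy rules before it reaches the user.
request = {
    "guardrailIdentifier": "example-guardrail-id",
    "guardrailVersion": "1",
    "source": "OUTPUT",  # validate model output rather than user input
    "content": [
        {"text": {"text": "Employees accrue 1.5 vacation days per month."}}
    ],
}
```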