AWS re:Invent 2024 – Day 2 Recap

AWS Trainium2 instances now generally available

AWS announced the general availability of AWS Trainium2-powered Amazon EC2 Trn2 instances, designed for high-performance deep learning and generative AI workloads. These instances deliver 30-40% better price performance than current GPU-based EC2 instances. Each instance combines 16 Trainium2 chips with ultra-fast NeuronLink interconnect to provide 20.8 petaflops of compute, making them well suited to training large models.

For larger models, Trn2 UltraServers connect four Trn2 instances, enabling scale across 64 Trainium2 chips. These servers accelerate training, improve inference performance, and power trillion-parameter models. Project Rainier, a collaboration with Anthropic, will create the world’s largest AI compute cluster using these UltraServers. 

Trn2 instances are generally available today in the US East (Ohio) AWS Region, with availability in additional regions coming soon. Trn2 UltraServers are available in preview. 
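
For teams planning to evaluate Trn2, Trainium is programmed through the AWS Neuron SDK, which plugs into PyTorch via torch-neuronx and torch-xla. The sketch below is a minimal, illustrative training loop assuming a Trn2 instance with the Neuron SDK installed; the model and data are toy placeholders rather than anything from the announcement.

```python
# Minimal training-loop sketch for a Trainium instance (assumes the AWS Neuron SDK
# with torch-neuronx / torch-xla is installed; model and data are toy placeholders).
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to a NeuronCore on Trainium hardware

model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for step in range(10):
    x = torch.randn(32, 1024).to(device)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    xm.optimizer_step(optimizer)  # flushes the lazy XLA graph to the device
    print(f"step {step}: loss {loss.item():.4f}")
```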

Trainium3 chips designed for the high-performance needs of the next frontier of generative AI workloads

AWS announced Trainium3, its next-generation AI chip, which will allow customers to build bigger models faster and deliver superior real-time performance when deploying them. It will be the first AWS chip built on a 3-nanometer process node, setting a new standard for performance, power efficiency, and density. Trainium3-powered UltraServers are expected to be four times more performant than Trn2 UltraServers. The first Trainium3-based instances are expected to be available in late 2025.

New database capabilities announced, including Amazon Aurora DSQL, the fastest distributed SQL database

AWS announced enhancements to Amazon Aurora and Amazon DynamoDB, offering strong consistency, low latency, and global scalability. 

  • Amazon Aurora DSQL: This serverless, distributed SQL database offers 99.999% multi-Region availability and delivers reads and writes up to four times faster than other distributed SQL databases. It removes the traditional trade-off between low latency and SQL compatibility by using microsecond-accurate time synchronization to keep data consistent across Regions.
  • DynamoDB Enhancements: DynamoDB global tables now support multi-Region strong consistency, ensuring applications always read the latest data without any code changes (see the sketch below).
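
To make the DynamoDB change concrete, the sketch below writes an item in one Region and issues a strongly consistent read from a replica Region using boto3. The table name, key, and Regions are hypothetical placeholders; the ConsistentRead flag is the existing DynamoDB API, and the new capability is that it can now return the latest write across a global table's Regions.

```python
# Hypothetical sketch: reading the latest write from a global table replica.
# Table name, key schema, and Regions are placeholders.
import boto3

# Write in one Region...
ddb_east = boto3.resource("dynamodb", region_name="us-east-1")
ddb_east.Table("Orders").put_item(Item={"OrderId": "1234", "Status": "SHIPPED"})

# ...and read the same item from a replica in another Region.
ddb_west = boto3.resource("dynamodb", region_name="us-west-2")
response = ddb_west.Table("Orders").get_item(
    Key={"OrderId": "1234"},
    ConsistentRead=True,  # with multi-Region strong consistency, returns the latest write
)
print(response["Item"])
```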

Both Aurora DSQL and DynamoDB enhancements are in preview. 

Introducing Amazon Nova 

Amazon Nova is a new generation of foundation models (FMs) capable of processing text, images, and video. The family includes Amazon Nova Micro, Amazon Nova Lite, Amazon Nova Pro, and Amazon Nova Premier for understanding tasks, plus Amazon Nova Canvas and Amazon Nova Reel for image and video generation, enabling applications for multimedia understanding and generation. Available through Amazon Bedrock, these models are designed for speed, cost-efficiency, and ease of integration with customers’ systems.
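
As a rough illustration of how applications call these models, the sketch below invokes a Nova model through the Bedrock Converse API with boto3. The model ID and Region are assumptions based on Bedrock's typical naming; confirm the identifiers enabled in your account before running it.

```python
# Minimal sketch of invoking an Amazon Nova model via Amazon Bedrock.
# The model ID and Region are assumptions; verify them in your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed identifier for Nova Lite
    messages=[
        {"role": "user", "content": [{"text": "Summarize the benefits of serverless databases."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```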

Amazon Q Developer reimagines how developers build and operate software with generative AI 

Amazon Q Developer enhancements leverage generative AI to improve software development and operations: 

  • Automated Unit Tests: Amazon Q Developer automates the creation of unit tests, reducing the burden on developers and improving test coverage with less effort. This helps developers deliver reliable code faster and avoid costly rollbacks.
  • Documentation Updates: Amazon Q Developer automates the creation and updating of project documentation, keeping it accurate and reducing the time developers spend understanding unfamiliar code. This enables quicker onboarding and more meaningful contributions from team members.
  • Code Reviews: Amazon Q Developer automates code reviews, providing quick feedback to help developers maintain quality, style, and security standards. This speeds up the review process, saving time and allowing developers to resolve issues earlier. 
  • Operational Issue Resolution: Operational teams can quickly identify and resolve issues across AWS environments with Amazon Q Developer, which analyzes large volumes of operational data to uncover service relationships and anomalies. It provides actionable hypotheses and guides users through fixes, streamlining issue resolution and reducing downtime.

These capabilities are now available in IDEs, in the AWS Management Console, and through GitLab integration.

Next generation of Amazon SageMaker to deliver a unified platform for data, analytics, and AI

AWS CEO Matt Garman unveiled the next generation of Amazon SageMaker. The revamped Amazon SageMaker integrates analytics, machine learning, and generative AI into a unified platform: 

  • SageMaker Unified Studio: The new unified studio provides a single environment for accessing and acting on data, integrating AWS analytics, ML, and AI tools. Amazon Q Developer helps customers tackle various data use cases with the best tools for the job. 
  • SageMaker Catalog: Amazon SageMaker Catalog provides secure access to data, models, and artifacts, ensuring compliance and enterprise security. Built on Amazon DataZone, it offers governance tools like data classification and toxicity detection to safeguard AI applications. 
  • SageMaker Lakehouse: Amazon SageMaker Lakehouse unifies data across Amazon S3 data lakes, Amazon Redshift data warehouses, and federated sources, simplifying the use of analytics and ML tools. It supports Apache Iceberg for seamless data processing and offers fine-grained access controls for secure data sharing (a query sketch follows this list).
  • Zero-ETL Integrations: AWS’s zero-ETL integrations with SaaS applications like Zendesk and SAP simplify data access for analytics and AI in SageMaker Lakehouse and Redshift. This eliminates the need for complex data pipelines, speeding up insights and reducing costs. 
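
To illustrate the point that familiar SQL engines keep working against the lakehouse, here is a rough sketch that queries an Iceberg-backed table through Amazon Athena with boto3. The database name, table, and S3 output location are hypothetical placeholders.

```python
# Hypothetical sketch: querying an Iceberg-backed lakehouse table with Athena.
# Database, table, and S3 output location are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) AS total FROM sales GROUP BY customer_id LIMIT 10",
    QueryExecutionContext={"Database": "lakehouse_demo"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query finishes, then fetch results.
qid = query["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```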

The new SageMaker platform enhances collaboration, security, and efficiency for data and AI projects.

AWS strengthens Amazon Bedrock with industry-first AI safeguard, new agent capability, and model customization 

AWS CEO Matt Garman unveiled new Amazon Bedrock capabilities to address key challenges in deploying generative AI. These features tackle hallucination-induced errors, enable orchestration of AI agents for complex tasks, and support smaller, cost-efficient models that rival large models in performance: 

  • Prevent factual errors due to hallucinations: Generative AI models can produce “hallucinations,” limiting trust in critical industries. Amazon Bedrock Guardrails now include Automated Reasoning checks, which use verifiable logical reasoning to validate responses against defined policies, producing accurate, auditable, and policy-aligned answers (a usage sketch follows this list).
  • Easily build and coordinate multiple agents to execute complex workflows: Amazon Bedrock Agents enable applications to execute tasks by leveraging AI-powered agents. AWS now supports multi-agent collaboration, allowing customers to coordinate specialized agents for complex workflows, such as financial analysis, across systems and data sources with ease. 
  • Create smaller, faster, more cost-effective models: Amazon Bedrock Model Distillation lets customers create smaller, efficient models by transferring knowledge from larger models, balancing performance, cost, and latency, which is ideal for real-time applications. It works with models from Anthropic, Meta, and the Amazon Nova family.
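
As a rough sketch of how a guardrail, which can now include Automated Reasoning checks, attaches to a model invocation, the example below passes a pre-configured guardrail to the Bedrock Converse API. The guardrail identifier, version, and model ID are placeholders; defining the Automated Reasoning policy itself happens when the guardrail is created and is not shown here.

```python
# Hypothetical sketch: applying a pre-configured Bedrock guardrail to a model call.
# The guardrail ID/version and model ID are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-pro-v1:0",  # assumed identifier
    messages=[{"role": "user", "content": [{"text": "What is the refund window for annual plans?"}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-example-id",  # placeholder for your guardrail
        "guardrailVersion": "1",
    },
)

# If the guardrail intervenes, stopReason reflects it; otherwise the model answer is returned.
print(response.get("stopReason"))
print(response["output"]["message"]["content"][0]["text"])
```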

Automated Reasoning checks, multi-agent collaboration, and Model Distillation are all available in preview. 

AWS re:Invent 2024 – Day 1 Recap

New generative AI enhancements for Amazon Connect 

AWS announced new generative AI enhancements for Amazon Connect, AWS’s cloud contact center solution. Serving over 10 million interactions daily, Amazon Connect now offers:

  • Automated segmentation for proactive, personalized communications. 
  • Amazon Q in Connect, a generative AI-powered assistant for dynamic self-service experiences. 
  • Customizable AI guardrails to ensure safe, policy-compliant AI deployments. 
  • Generative AI-driven insights like intelligent contact categorization and agent evaluations for better training and service quality. 

Leading organizations like Frontdoor, Fujitsu, and Priceline are already leveraging these innovations for enhanced customer service at reduced costs. 

These features are now generally available. Learn more on the AWS News Blog and the AWS Contact Center Blog.

AWS announces new data center components to support AI and improve energy efficiency 

AWS has unveiled advanced data center components to power the next generation of AI, enhance energy efficiency, and drive customer innovation. These upgrades address growing generative AI demands while improving sustainability. Key features include: 

  • Simplified designs to lower energy use and reduce failure risks. 
  • Cooling and control innovations, enabling 12% more compute power per site, reducing the number of data centers required. 
  • Sustainability upgrades, such as a cooling system cutting energy use by 46%, concrete with 35% lower embodied carbon, and backup generators running on renewable diesel, which reduces greenhouse gas emissions by up to 90%. 

These components, already in use in some AWS data centers, will be fully implemented in new U.S. facilities starting in early 2025. Watch this video and read the press release to learn about AWS’s new data center components.

Peter DeSantis shows how AWS is innovating across the entire technology stack 

At AWS re:Invent’s Monday Night Live, Peter DeSantis, SVP of AWS Utility Computing, explored the engineering behind AWS services and its role in advancing AI workloads. Joined by Dave Brown, VP of AWS Compute & Networking Services, and Tom Brown, co-founder of Anthropic, DeSantis showcased how AWS delivers performance, reliability, and cost-efficiency for AI. 

Highlights included innovations like the AWS Trainium2 chip, purpose-built for machine learning, and the Firefly Optic Plug, which speeds AI cluster deployment by pre-testing wiring. DeSantis emphasized AWS’s commitment to deep customer insights and fast, impactful decisions—like its pioneering investment in custom silicon 12 years ago. 

Calling this “the next chapter,” he detailed how AWS innovates across the tech stack to deliver differentiated solutions for the most demanding workloads. 

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel.